[Binary artifact, not text: a POSIX ustar tar archive of /var/home/core/zuul-output/ (owner core:core, directories mode 0755) containing logs/kubelet.log.gz (mode 0644), a gzip-compressed kubelet.log captured as Zuul CI job output. The compressed payload rendered as mojibake and is not recoverable as readable text; only this archive manifest survives.]
A`EHs(%T!/ਔ$E[AzoFYP!C`τR[(C ʍN!$&3Q5L5ls89_ ' S-yfoW|ONl0T/#j>FY/ʪуT?Gw'OY$;vԠ1ϐQxE: ǧZqN94tJptmQ?k5D'XOs]$Vyhדksr .ޕw6w⸬Mz;_M\Vux3p"W WS_^Ī*Z[)ɳU+Ԥ4Vݾ 3Bʤ /뷃9խU|cTO~._Zgw1JY6'f䮇t><=֖۵]8 AVzՙ<r[ĭ)^7h0l0HqEfyxR(\y'oގ9 g''|8*#Gm&i sE(:D]Tu**0R܅_{/qKHh$[oNJ>PSc}tQFK8Dc$Tb>hPNCRz>ҡCCscɤM.)c mKI.:ik* k$@=H{V50ꌜѪ:$d6Niv̷{t j6{Dհė>N.FqF %~9q), -٠A Ze+q>~G|1r3wx#q;^ʆ1ΗKqy7y7+g;T5{1ƃ;9:9ZGux) s;11 9/|.p|U2VU2NGr_h__-SVǃOr(0b_z Gߟ/@*UF DgoR4Ǵ 2>8o g7Ihuo }ofXe]"x]iZֶZ#i ;'шv/ i|Ndt:йv2 ) Ѹ+!l-iZĘLȣF$ Z8gNq41ij"B;ǍXb#& !߃QJOuy;;#E.Uya,`P|Źm(5{{7çQz"sͱܟr7Vh=GGsN0)!!]:B֊;6!;IB %[JXUiwHnOhR*gJ6_AcVY#v1P)q"%:znl( 2_z"Ie63k7{T&95HӦ=z8 I T $S^f- :,3`(UN+1GETޖPʜu6mۑE2LʛԞ3YC [xmzYsA"/k/%R' ̎ Hrܘiɽ̻x-K"'hP>JI'sd0,5":E#%dZ Gr(qDE6[iX %Gj(_8}j7F7F(4Cj.9}aƒI'4XpT>O*+M" C,e0N8 4D' G "MEU|Jl xa:?\o2`8+2RLsFŁYy6zT?#RpFm˂Dyik4]JȞGY[!<K;4SYL'4VɢY,D:# țK+vX{죑LS7!OV><9ħ԰`MJXb, =$=;AYJ}dP?x@ _a19A_~3JU]tXl ӳ^>KvDŽbw|7\fG7pO,y s o|a#?ܗY3?v0.$hYJG_˭}{3[v6$X3/ysD\t6؈6Eyϝ/\V|dE|?sArQrc5O G ,_h9Ep>NF9ճ?VsaޏK Ͻ90=O+G㕩Y}].ÿgTS|bVs9z ݶ&O s'ZYf30ME8 JMgrO_.^Q%l 粟PIh %p) bTm0(3Y`jmY41[WzC|UsH}g;]Kgx`y^) h]ќ|Qqm$ aR sLxʈa"mr1^gաoFs=3?;,Ɵ30s2{Mi,N?rFhahߌ^3~΋)RgC=(Si;W'Y (ךY +J~Fo}j9K~/p#]GFUն_+;\m*<_c7/ocu5tv.VT:K6~G gm| wol{wjhy=ǦR~v?>A,AZ [9rE!g6^cR=:R!tLqIG|$ooMA KH!p)`s*S= NձˮRx2sJJM$r*khc:%* vx =:gm o}8yy2y^,bf>^_;xl,oz'zB#wY[jS0煉Ԏ"fI5(kqZڔ =;Nޗ ?6QN?qdQ4NScp@ :g(@,J91YíNfb>-"̬"=s_LtwPQ iW:a[+=ǿuZb]^G\, 2ĠHG2IeF1.@)E/ɪ9;4 IAFH=xxa)sffYc˾蓳x8\K97O6JaTQTAg;㴔C])SAF,yg $DͺTHgr7wERBr ruHbBhBdv :%*I% A423gه[YIg싅0 +>y-jͬ;S貸Op:{5ΨL9'}^ .&G7&pbYЄ+LΐEly;?A8˰ɥ *aGI L# *̹x֒ֈI}Q[Fmգv`o.\af?D 곚@p`#IJNE0 wxy aLGX"EGXI--[̜Itdbƶ b1uaD="xG礍@Q9E̻a>hU`\9D:;EAFSB̍N1T2θ༆(G=f p9w#_C\֋j\g1/ womCHE`^DS0j \XIMZcrHpi'x \lMRޱ'BN` l"t5cNZ ~RE .w!OT) ax9Ө^w?K᫷X# JJ_OF TFrS,ԎSWC v}-m:$ۮ$[$-ҡVAP]q7%:`z!mIq>mNWP8y.TA҇*Hf&8IaQ% zV˷Kǽ=aKmOc=D9N7P>e|:oh(T(})7qV ǝe9Y#kcC`S @9߃:Hou^}R[ܮWAIǔ۞ii7#dVEr7#P(ȱQ{R*jÔ>Y.hS)0p 3z3ɆIY1W"Wg?Tl~j9.#jJktK 8^Z1WW_qxukJ4VWsV}n_zQ1hrkI,_[׹u.}{_.~Oc$I`a,=c'bHy5$ "Zc41QTwsӪa@T/|& G \EɡT ,<{voEa8f|+8^4Q@,v*Z p1w&8+Q݌ot3{[;lz}iZSA^-Fd rs9?T߆$b/ƶ:Z_;Uf=owwK[%^hwࣙk)S|TםIJt%h{VRb4y7?`3pͅJfk UrmGWJqO)_k?~nG5h2g3FI֗hY. ϳxJ/U_wnA5v~V}u#gO6~l oyGb [[EFd[S3~xK!W/][sg*8d&Yv %k''k3ջr.ăi]XDwˊN?8;9^_֜Xj"mh2v2w>}NT~rbv/s9qe4T~VVP5}{﫧qDMC@wu<~_}__sWj3NZIV~4s;xȃ :4 d5(=]Q mL#0CpU6WZk8v8\e+WCp aWSZzB+e=\=B0 \es \UGWB0]bWh$Yr vV#\I!T; ;sW\ә+Vֳl=\=BRhAۻcJFhn߲{\vi :\Ve0䪇ӚiE;y_cn6272qڱSmP|S|^!?PP9Z5HGk$Zx-.{LZJ-_KEkEPHUS[Rԕ!1N;*8ꭣ!bzɑ_1 Y#x5/2Nb *uɲR]=]lS̈8<"%%dQ 7ɱV䠛V.9 ƚ_5D>!RN)]2 Z%k5&qEf쮨[u ѕEtWO_F6ԭ@ݸ+_Zk͇ 7Y^~-9]9S}.G*7tDc/ XSv8V;h-oFdIBzL@Q'Py: $w ^r QrdQd"J1;`FᴎM92ɠ3ET΢(Q; B2#dN*(O-*'rDZ9{Y/W16F!K@)4Xp.:ڠ("T>'BQ!Xu!8pY9Yj*^1"1J_lrŨd(j5Z8v`' &ϏT IX)Vلd Aif!˜Y X,<$kAq4QS}~q]g;=źޞ;5xn跹m'Axm.RBN%)XH@R" JCAHh<" ="L>u"|X'U[[{ȏ TR2F'RUPQf$aPI 0L)¸A<ډ-HG3yt@c qZ7V7i4Y?!bȷͧ۾hlhڿo04ִ?iM@ՂPMVK[LfBR YA c%)m.!$YxaCR !$e= QKAHe du6$Pj A(!GfN%!2 NF"`u@'ϷOG6d3)Ė,hmӵ"gl '$)ҽI,-I@IDJɺS*,3$e:3K$RIa7_P[Mu^pfɱLqleK_"J( /HE/(Rc1Y)@Eg )ɘ-LJ %) y٘J9$T!mdd]D7;X}Vmq #/e:3>vƼ@!҅} L[m--foLi˘.5&͛pmhbLG9.==OnşAA8鑌r. 
ުbL*[s-ժFE#j bKlyeD !,B0Yg,lc<&{YWH'tPzwwtsr㯣iL;JlC?yyin͛æ߰oolףGtϗ͂0Bcvn:::RUyW&XMeJ=6ڕ^'Zنj#VN)3(;=fS\[t!;à/uE@gC[{AA4MNo4zoϣٻ^Dy&j`7.'ZJ •e9ƃ*gɌ6{'Uj.6;Zyon$Q_w4?ԇm^^w;m7u~ry!~Ǻ(|Ƅ} UYbVM3_X|TI[nq3V<[FX[<p:?Ϊcz5%#9Im~o߶-Dّ%C,J^X狮>9H P8kbSȐei!i;]򀳞6r @\;9AuL^_ dݝ맂˶wV%AE_]shNF v]No@;!Ql%u$uC; Dm;Lʗ6W)07JaRyMQ$(/|2R1 H"rH2$[+=#o ݺË^d5s+ɧ38mQ۩lX4:_#u-j4"(#9F}pA`,% u?_B)m m299'bW} Th'Wxt,dv?JlLVqg'ne3Ujd ؂T95vdԵjmXkAkvkM$Ҏm5qg q7kgAgQh͊!h,#Z-"&=Ȱc"Xdh)=GQAL,ؘbڝ٭{fߌ2Ux*=QeI뛫/_2x TZAU0LbaXl"U*5IEc,"4 G~l-Sq^kWݭ$z./M[SW'iQV9 9(ku.A%eP*X|DSs >^IΜOλ%noKfL#0ZbW9-ͺ/Zɂ EtZA l\:V}! ̋JؿH:5HRMvCŤ{?7gV1iu֑(ls' >Ɵjû !Ðʩ.(,$(1巠c%ʛjF}ZbB`R"0= CeX6Jf@F{.EIHC,1o-P,A%P&L`JtB: 9]c;Q.--]-.OM?qp]! s= Ao\iҭjWu8Aβ|0§grLP]U ,"} *Bz=ԲD΁/nlxs!p 3(7Y G{ݍnw%E0HE[ہWA2pZf|y:IZG2j˙s{eVF=IBL/5o?wLn%o4BߑL? gԆٴcr86|ZO _R ؆/Z/?-CypRJE2AV.u4hCl١Ba8=Rcc $8s$#([[X''0XTBŜPMQ `eL΃#%)^HyӺ9;fwL؅]>7"64uױmۧLRMҍxtut7}z^tǤc;Wƣ}5 yd#ov~7'_+{̼2r;?\]moyu{oϼU4x.&1"e曓Vnz\>rWL뽉3yNwpH-x$WwV[ۜyjp7W}ygWtP uPK7A~?:}HcIb.b[>*{yesz߽a𤏪ձ7Ū.G]~yI&]e_r.o՛Wkjz;gcHA|SOSf"z~ƿ/??|sK#V/o~oWv3"*W7xEl+`DRu/?_\_@ΛB?兣߉74.ϯ/~;VYvOZ B^mU3'|NN'i/ 0_MIyqyjt6Gihk:)E5w=>ӰZNY3Ɓ`mր5 T;`qF‚*&TA0ǦNك)ygh)Q4e~-mwX<,l:Z7%хY?R6ĘYa J+ #,ˠ*//uGQ "%"$ ,ZiA$+0F2 @H6`G&go)S4SJ?M=(q3Efqs~fZJ Bx]o9W|]` 823,mEDIN,bwaY-+rKj6b=(H83"EG.F vh<=i=yj'QUk5D![ąG/qLy.PVe<=!HZ*}Qś; u&-rXaDp5{ S(9P+̾ӌ+E)G6 'g&1n,̺䁅 QN[Z˔d, qmrj 錧@%X&Zg\g}N˩re9۸ r N/.gPmyn1ooC>W ]6,?e\iQ&Y񙁘>^CWgǢJ{_J%y0BH VNiԚ%kt7ۉVvYh*bw[S nM:XEp`xTBK$cjzyQqDykK2FAǹ*D5)XD(5r K#n|\r2ɔ-w5|}e>\?Ć,]ώ^ˍn:.쳔;G Wh"g$J3k0xlNkUҮZKk/ ҬlUxXqh֒'0-@'m(*SX ԳT#GDPweqʏH7LbMC3VQI0 DF&EeZYw # тKkWo{?qvmx $! ,@IxNŽ*\&Ƌ.8*%I:Fjv9ֈ# *pRjeuD)Ąq&jIq5k 6dirEmўBeb`] --[rrtMLqQjۏw!70t<&W656J~En'D8 eILrPqB *(C(%99ӊsM&O&S'q`Q$:++!ѻHR&'k>/óyaweB =ϥC]RԳ3bDϟ/kќjC_hRob?^^q[Hxʕ#ލg[ټK3՝南W«epvesb.ܗ0 gr% .WӲs7O _B֎deq$!׏t4 kFIY^,oCJ3g1z6ѓŘ/2yC6Q'ْg<|~!w}5Az|U[r ;nTgYXA___=e~뗸g`7&7 ߏsn Wn?`m ͆Fl3l f\#7{}XHR|o-w_ެ3#,vE^}F6n'=(i*.\*?9b݀@][|X:M4Mzy'%fੱ>:H( QgxF\1ijT* 1p=*Tӓop䞯dt #wnHl*3(MҀx][YD%WxKT8%޶$PWv:c|kkbG6Ɏ1N99z4xf`oy 裈fv/.P+ rte1\iTQ,M|@1 4kթy\Dyr~tE%G\ x@, *% a.if!H`r9!kGvURQ%rӍ)l;u3t ׵fI> ÏBxE' 5V&8p.$ÉAD,ř-^nRKKQp&Z 8|>`&9$(2Fg=>F9hNsG6T*7fP25˺<+KGC~S&+A³gU:ų7x嵛J8}+R7#d\?܏Z+KѴTDC&d18E()S#eyBgFz?y={Wᅟ,^rGWFC7u涕+jY -ޛcnl⹞To(}{U; 5ۻ,owWqޫ7?*Q}U%%{=l{1&szػ]f~=_=ZTA ݿ/eﯿ?^MrF+ˁ U:qq y3TkS,S_TᅁB1&T 1A[Zg>y6nI gpHa,sG#L:-ihΜSIyiVwZy#q?FmTD}p*ٝjN0&yjXJwћ3YU3 b׷{~bv3]U 7վleW oZd/-0{m\otkh΢Ͽ~4(qv"7#_QͽVi]la eCߝ qq V.pi%XFo6s:."o.`BN经s-ߗb~BkDߞ.s=M%R7R c:XxkC$Bd)u z<ZDPx{QBy.oqGL[p6Nr)gMz7~> d%#ڼ cñE8[>8tvF{#f!BZVZ3B>L|4GZm5St~GGZ=*W rʠˡm$^歼H5@ ˴/S4i]\AWpqIZKQGζȴDf\2&q+<񔘍c2.EћӈH#FIUZ;~va]Xkڅva]XkڅvaffKQiW}Z*.>~3fE )PH:׵}i(CҾh$",d$uIS^[0ieޢ7U'hu A!wVz=S-Rsd0_* 8+ww,ud9nߢù?&דi(2w+fޅzt9-YkDn$m2,U\ 6w{`&'H1^sNSl]`sj{Nm6k2&:b%#Nx `Mj*,Re5KJ~`F_|Q>c$~ٔ_ "y8<:ݭﭔn{z3E\YaXbRR#hehʽR0t>NiyRHTV@ jdO;g4N4N<5ɓhvޏk5D!}fi]񻩙i*Či1HtOH⭂u#pwpgbr!h~I/[;&hH "ӗ7kqؿl 8 WF+/nk.G 5W/ŗQGƉB H gq(sNC2 $r"B@kczX'l_?}Jν ߁}p42\Hh"8Ҩ6fa(A5S Erg''C/!5cL(:(ke(kF [ƽ_DQ-1::XT/T3*(K*)Wl= xrq5qGFN<3C26N\;Jr{e.QEWcց\ , rJ` ]-Kw.r1q Co2B|L*("0GtSB1+Cjl6g789=_@ksӉq6}*{wX^_їZݠ&Y՞_>^U+U.0%1͏ͬ5CA+iơPi wZt]B:mEN!鶢۾DۭsfDqBMFFQV+ʒAdnAeݬ8f"H턶D.KRYثT MBAZLƃ~հ8\2_j H#O2a@V|lp";2Sqbsvpd\A4\'$O7sa >&c@Mu\f[rM['谬"M YHa0C#^ C+HD3zi~Vz 5zkVj4U4MkM\࿧+I J+G4z0cmv]{tr:z{F=kڑk$׏t5F1:X#&2fbgDWcOSdt*ݣ.kԎg%^:.|\HX[O$x_ f/T M n6N`x~LOoxc?~rސA R=F{_ o|h%j M-Nm2 d\W;}ZI֠R $OoFJ4"n(-+9 Fi_!O[kt-UEAW NS+Ba@Wݏekb\;mh$N1q+ r#șgORdAB-w[,>HUw8,D^r}r:,žs27t4>5j-^HdD,EMӞ%~_ʾ^W[[_϶qzG,'#jC5ycCw־k6\|wʃY+<6V4Q΄}+uJ*b>Y[!R*v2hLgm s Uβ4v֦b <*y$/+eR9`K|oŧ{=?GB8)MpS%ˆ@41Fs) J NEvb_s ?MH~ N{f-%#t޾y"r=3ly)23hY+9!]B<6] 
/ڟ_2%_BGޅiҡA{9W>\aq~采Q;nuRM⋖2$QVFD-9KS̋W3_hсn!iйu:I}Fg6i"-ր4A+n\~ݺ&dz4-Xy{3^3LMg1Lbo@&Z|>j\M/CbXD8?_4;s,[^'EٍpBb/u20@lT`!e |y???&g#ZxJiRܢJQ ^k}8qRk.X:lϊȼTD%wA1Z+Lv\5rnK}2v]#v{_̷vs|% *8_y*x*{7LyalUeGFQ_$Ƞ1Y-2%b΂cHkΣ)5:RlÄ3L!96s8糗 e8eAdpg$`V@j2y;e@FΆ|ua=w,7vOQ5o^ H!XI>2` og,>?^d[eKeA+tb$L:e+!R4CjP &'I[58M JHGͅB2LBO ():rOU^%W[$\G2=%/$S)D 2 spR0H1[B. d{HEVʡ!z㽩(Qkl|\ee XT<,{Gl^D=KL;93m{K04"BʍHS(V'/㰣i4]Ҵ4KCH(E@BlD&J+-viU{eJ!)4*5*Ah񱉜&E5}HJ݆= I1}Hz"+5.UN8Br9qKh+ͽk= Q񲮷X)b[iK K B,'D&$LdNMHG2NF8uQsdW{?}bSO#٩V2Ǩ[2nt'N{Px ^KQyg%#NKl8EL"nLEYASV6e%-1K/gkN3DdesBQD'l)2bm*p6"K` Y'nCBqǤrkN߻62)pu`9̑{O=]/B׾>U^;zw~N/)٨9TpEw5ߨMUK±اl3>;|7`:j^ Zo.b vqCZj[5yٮ|C Q&ctU:+̒0dTd.A N"w+sK_QWSa~HU+ 3VLN?V.]c'bn3dB%/L(4pt-S2d`R\Ko|9 ڽ)I3?& ި&0VeW0hu/Y{ƹAp4frq~RƘOVsE$KZ5PACj]Y/_ `o+P&+<ȷwV̬ tވҮ\c8XU&ZqPbi1K σi[9G߽zw#jоUqhH'u\P˳uO)kjS QNgBJgJw;k;vfӎNpaEg,5NG2(훣ٻ6n$% ƋU='w}*oiR&)J5fHj$SҐZil 4])B*S8 z9̩{y VwVF J=>XD ` ܙ!ND$4|$7F+B+qA?cҭw< ~S4[1]6[)m0x@4prx†ʴmإ̋L;B`>8Bڔ?Pǽ/.U8ٗWQr4xbDS`3Ϭ!@,5E%vWQ1HL451sD DФ69k5ɜ ]ix eQL<쑌3ݝ}RtM+7%s㭕{p:D-.yU{2!j` = 2D HG41-Gp$q*N)}ۧ AVw4K,g5ΕɭӚ[[g糝&uzF;(rl7xsᒲ9D{saw%0UVn8;㴔U])SAlƠ^$PG[*$'"U !9G:$NCP!:X[J؄$63Alָ5ؕ E\(z.<(-.e|Ozf9 N/Oǣ3u8^0yU @@,ZB ÉeACcs5O͐=A+qiK%8 A1TBߎhF&A+sۂ8c$\̣ٖtڪeV=k샙"W1D3AD FEGp`#IJNЭaRY2Z9 Z<*,2<QxD Lls[8N}E1X5ؕuˌ{F!NRsw{5C6 3&J(DufڎHG X<*I3P* >̈y?Gj8/uLsIɮJp`;R /} +# Rkt N2mGsH[ұ#BvPz=G]}9F6:|8m2IAp}E?*d!`NHgewe_^F[v2J~_ ܗ'4ECtդ3ۈ3ZqtQJK+#(hα=j/?}~:B@A!2#>NN L)+_H -!A[5qjпy2pB6Tˍ磲yFl/fwl96طK_@')9|Z€!&*ܕ~~o[h_tԺc[ (dZXw@.~{erh%KO>Yz%NvFq>S,8Y!5עl5$l?o׃AS |:_z@ӷ@O?NVRMlK=.E/[+-Eͫ|럐yl > tɵ K4&w|,~#^:ߙܤ"xl b[,eG%-m1iF[Qcݩ`a<=fva/z,; $Rpq(FR16Tq7VgS0l&*msֿxO/iKYĥ쾷gMGi!^}e̡4_ӰҋPW*dcl) k4y7/ξ-%7sak+ 6s,J確{1AↃ˽2_\OWrF2skY)T-\tQx{gj[{k|V+*].vo]JhJl85Jx=^Y'rj5z芀6Rzіj~ݫ 4yLy`- (H'e)y w5,ӂMkcZxbZL)~O ̂ӟM37|do[mos9K,ьY5m[p"Zaf1jkB(-Xᇥ&#؏#&+ehU=)3ި(ZG4>ukF2<%][3yQ! R9ϼT~&}m`=4)aDJ4'9UV{nn-}Ɖ&yFF֝d%ւu X.W&׵82ҟ|)`0 b(BX^ fEh6Gx-06df"ݼ{5ޗ*eEIj0[^-/,iDʕ_GӳP \{ ={ɪ͜ɢY=:y#VcAUv(n;7JdF+ͱMf? %MjCԡJWkak2\ޙ0VcR^ ]U hg*+t?]e۞:NW4zANA{l 6hk_DQѕ z]"]eZu+tUF)yOW/8A_w&3trhA;]et &K ;`+eW*5Go]!JF{%ҕs!ʀ ]e3 2J{zt%)%gK:CW.T{F{hi؟2J_"])Uݱ2\ٙVu(^"] .ԑo#CC z߆m=VfyO"eO$P9nv4ZHk N/Ls&nw֌(ӥIdew&3\]2ZCn@/n0;DWК¥32-NW ]ц]OO %9+1;O)Λŝd`74;-R[$]zlSRo_n5/tw]L6;oȷR֒1*>'X~}Dv@T6ב`"f7ɟp9^vگf,(U`}٨dH ӎu2zhbDH|Tq|!$%TP-B'J _G*Y^I!y,B: NIJ/2 R+(:rCN4h0XCrX8SBj+Xf]?>[pb4YIMbn?;RdFͅKZ<&jOUyv8_sje|][@G1YM/*6MnjxcE_xʡlsAyP\벨47AMtA"&y^KCI%;. 
sZr/knk|~ٙ"$eshP>JI0 rGu0g$LBQ#ETIEXfJ%IhdcTZJ9w58 ԫߞ:}KMT hyIH$dZ GP㈊^m<Ұ7J2UU:&{S˿1h1 T 4C*j]rHՄN9iML-~R'8?Y?C ;RR&O)]S=U,MF;ήyH%!)Qł0Nu# 6AD@{A鉣ŞQlEs_O&,!'ÜUuj3/qYW 9pY^&pa:7"٠Lp)X >EL{a(Ӝcq`V^1^ψ8#.g*Ǣ|ȎGYʷ,jBL1j-".L?1Qt7q0Ǎ2: D(R"BўzO/E;Z]Zxs|X>/QUnco|4$e=`4:Pڠ ф=@^Dn-vg[=I&Xn9EPCSJRQk"Rb RmNm v_2hGF4Jhepd2TŰO v-"*]Z0FxF2!B31R*^4~Ym\Dm#E8w: sٛv,m EˉߏղlQۊ\D]jt_)"fY0MCcNyYuH{t7L#e\Nߧn=|~i< q1ؚOźxK% )-Ϣ٬Ẅ́0BG]E.tDH)sXctxS*xNok:cj $./'-@fg؋Ʈ(Rh;tb 9ǀ$=]&#!dnگ]D|~6pB6%;@XK75ECyh+)bRuIb7>?Nh~].{iTӚjwū.<^珽V'sw C~l1}3=$UokU=[ o>zܿ n%XUw[4Sj $ f(RjRK w)=@'TZg ٴ>_$Qr[U):[)Bڝ1@ل쯏?yJq~c[doHB 좉ȌШ@EG@ 8Xϗ 6@ Yp=%mRYe P!t)@2;]>j7@kA]<և Yw+^O2)`?0'bVyCF..91 ̺'NaTЁ $t6dC>@ݔ|i%_w]k+Yg8)K0hAe% c*YE W(@-x,Tf8@dMQ^%RǜB(H"rFW$MJ hk!0^G">m4maؾvL|(:9_5l bS(2KG@ۜ#yP`ۦqp(-[@Dx]dWL)m m49:'|jJ v_; K6L}Rh'[Ѝvl6ؿVȱgz j}x\tиJlbk=d6;bQ.;0UYJ%FQ#\d,N債Y.e_ GJBa'36gfX/lB/ܩ/廊:afg:xOdrv5.q`d2hS} "(Q(kA-^hL^S.P^J0JRZUKfW E-`c7aϦybMVڱv;xi\M)e\!2.FSP\ll91mXc]HfȌ k%JL"YEg>?1IO.nLaOk8SuǶ7>'*iS&KbڐSRH^1-2V<| B/-\mT?sLȒNOj[cL= gݵqSlwxxoz YI!%2Y`3 ZzPH!-8]ŽРvlqvp.``WK:ȍ5{:{0rF'Anw~|Gwx69ٴ;6H{( G1/9ʋxH{M#]^{t/cJw1*)PQgkA2nP3f A72E**5IM‡P~qPTYqط☊2nYjB?~:5mr =(jA: :TQ$@5jds|n=3W=נ'ndmd 4F J=A6֬c>jic̈́\:v}JN 7UH7סۡRJ;ss[^lH뉐r^)#@~u)ѥT C*:T.vYT6 Y|Pmj.DaHB aRl.)j )IEYyϫ\ѥ($j[`=C(bLڱ0I$D= NE/Qh%á1݊FM!{vLuѾ[|l8Ez4?DS;i3+iM*ϥOp RZs `<)[qy|9,ԙoo[TQZ(.OU l E|%fG|p}[؎V˵0E+A <c&$)8ҤLpF8I¹`}0)_Yr"%ɚӍ J"^V\.N>E'Y/SRYh ¤.Lc 4JJJ')\S#]M9"5Bieؐ4ߠFtQ-T Cjj&F|_li?lpZd@}4hK'=к{~E@(=ki7Fim[' N6 pn}yts°vt`\Рu"ID-wo|{ \7bqMN D0{(43E$d̙t[ X#("]҄0lII`@$Yc9@G bG ̄/x,En]L 7#(*U돪$cfgWx?8p%5,1{xO x54f53Bz&5(#$ЋsYN/lFq쒝 `w}l[AXJ(CbqI8OE#O8UhzYŗҾLyFS#TGo{4?>L/]\r}.Ս0xZ&jۥ w}˓0 M B 5ηC_;# a0;bYx,֕l:ӗQM:p ͨ⧅ԁ _.R2#g~/jsQ:Q:鏿?s?~yϿʿ~_/_ˏgR-AãEqG͇jho1Ц]_O&<0#vnހ~5݇,7|eɬx\AWu<ʋ_aO|(]_/wR<Kw!1v@`(KZ$ʕ-aHhxm!ΓJҩH!"wV2x]t9TkKTc U>pXOtX8=R20b)&buV+b)BOtY NÇQd){&uft|o҉-nQow>fRytMY;O㴵kyP~ڽ6hP h:`pEBgPEh1]v*h;k_WdL!Q>Z.QkB!T1މ]/1"D)RdO8F4L^8|ۓOvRLd^= i!(0Ubk|ڳnVw\.XS`M}#C@ TO,j'kg.s+(v|{7Oo<;7jϬ 0&; xUD&1j𫱚Rdr2&)tl)9 RH2XB1dnvW !mY.>xKx]]Sd7+ļ}?L>Ć40m>\( T5J:R'OJ,S 0?Z~=qKsHRlhS t},&A& |F,9 r͊ReQQs|As9DH>D {UMC2kБs><з,\B'A(zmRxUf&KrW\)eJV  yҹ'Ŋ䔰U!$JE5T`/O;%j22jcL:UKk칽SlX t86aR5Qi<g:;ǗWml{;݋wL{<#7,#bW}xO)225:P1d'֝1r\_`=l<85rd'gw_Q$q*N?}y`= 䵛}޽e.]3-S].RBWkċKoJ(l/=J?\=lHeue(9䰶ݫ}{TM+__C[%׷yE6b4v;Ixs;꣏T]lz'wZk4StS#ήd]5cb*͝ۑQlNYеWDڰ=kת5v߯(z^Ek]{t)W}E8pup9@UsfC-Ruanaю'ݻfo'WG_ΖL  7umi#| /ڬsXYyG:(7cMyaYBʙ-^l1x9IᔯNK63‹0B1 1J n:MQeaDIRuIYYu'9~K'{OO.^Au׶x8tev+)nZV !hmsJZ$f{hɩX9+Hk3LC/G?1sZkg-ε(٥jm*~RZJx)5 d9t~//ۛIȜUL!z ԓDԑI~fG 'KSxBo~4*zZIE +M͊M:ƕ) LSgPMvWr2$v,}^%(7(×;Li3FGvP.=ṇ 0>i+C1LE8M_w<\ɶ3ܒ#k֗՗wW5zd^N4j|⵪hYAg۲e'œتP#KJ{?ItbϞkD{8A٣S.ϯVE z~] cGZ}0MPSěO?ݬVLtP4E:+u >a e% oGgur>/Nd?9qO*up#&'./N_].yPG% ^k01ua5v 8|,'4?5LǛ@;פh%3>e~8Z|:_BX/.;&Wۿ%x>yh*C Wj %Mle\bS6b*m[R+b7N2ϠQv $*L29T~a?Hut0i;Zo=(uwEJ8]u` h NW3]}7tevzt$ޘ׿c oDWOC{XZ( C܌}itqO}ƥڣFG}"*wmni8YR2Ļae#{ϴcM$kVg[ds$oU:[Ż9i/:Y[3'=;h7ie~8/?~[+1PYoR%-JE|TTYS>;]4tq7[C49e\*1.IU(qKȋbM~M-]Ckt$q[7ܯ|0 MTCY:Yjp) AFK'B9[/ոUQOk;_+bT ɨ|U!Ec떚 %F)Y0f9xX򩌂U.I{V+ \wTT5IR]J ~;a*ir E*1GSN BoQxi v+**V(:]-!xy7Ԡ㊃l JZTlKJ|d.VK#ǒ`ژ[M%D\JI٠+.t-a6TGV1%iK+xVFT, \Sf=a)-gbh# aCi%E)ԂVZ2"c`Q/H1rlC?4Wj`XS$@F&2 aB+l=$8%XY#\*tJ7P^q1b,qVLFjlU%Ә;>B͐jPoJ.@a 2o)~v@ rF7jȉQ3<4lt,,xO:M;c1I#T)I01D5֡*9kL66_ `؛ ;Sѝ)E 4ShҞ%@V 2}!RAPS0n] WQrªIUD1J'hb9TDRp ZF%jfJ o,3>2D"Sۃ6p > VNJ IBq&#f }rn3RYdzĘ@QB%]a:i@1K`߇ 3 _Wr$I&In}zle&h5MLkT^Ei:U6Rt|i,MJ*Æ)yy0_4)Xv=!<%'HiDiy`!|8=]Kc4- }$HVZ;W q

F/~ S>H>K4ؚmh?cy[!eќq,F%4 2Ko^5B@YU[f=;?d^a4-I"}׌I'y%Tam \nSZ4+KF٨+Qt6l4% e QM\AtBÁf=0pSݴ{OSpRԘfxk]Ь!碦ƛZ TfD:8Z#@(ťpJf:E A}!FpJ7D[c|Dp=ZMfX4 8' yK^HZ6l!& U˥71.ń12 `,dEH"0DЅK=)r -0 4&(?uj+Neg,!z c#"իA8U*nN/NՓ̄lS&.:x]vNgJg}Bpzk.NY@AkI)\`W´۱_zW^%y=4?$Poھ5J^ sa*]H: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"&`v/kV::9ڦ"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: =S`05s9P}5+"s[t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"H\u@GI`7- V:F:\t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD:*ވu@ڊߏ^:iZH w*%YGiz9xD;!ɖl J~0%X˖`$[z/jJ6%Xmn he^ W& |ћ٤IoYaaӯ|ѿ2_޼<:^/S˘@77 WՏwm#A4  J7sv nj(ݬP9tK]uh2h&9N?>x6m^N+4<8 0tGť?XłcUr 6cĭIz?>͸CP?LmY_8E'1ےSW['Wx~ Snuo aFOe}m.ڢ[~7xA^&& YDČ6JdR*3ʘM],ś6C@+% }XEi>-aF婯5}}U8n/s ys~5\||x1\m _㶇zhj+%WDJ )ijw(Yk従f&RHƮ}gi;e"1hmZB;Gwxn>N;OEc*r k1-R %9EUgBmy*-j@@z`9 ͶqsɷW 7^~NNs~OXE&I۶u_n+Ax>}k(}Oi&g\:{]+HFVUG%jbjTJr L}L`qQ$S  *חA\jŭY{'6EE9kI!{__Ok r6G]vd޳`uo$Nw> >} J=;էia*I54S8u 3]kjt}j[2d:5 DCGͯZ *:5"#Y4ҧLI(+Bzt%%MαdWfЁ:D9`>PFu&,I8!y[.N-ή\{6| |{yv #ΘnmUl}1p,Zid3AY# sL(aRFՠ8Dq0snG`X8g슅z`,ԄKڻ]\6f|٤mצvyt~mgfb< vȇh;91*j$-l&4~<[%NV";&-"1A[:Y{! $h)MdVl+b;|Mmm{0snG8s_P{0m HXMmݼ/q.'Tac8K#ɊvaƆx̐VI Ʌ%#AQ \uV\! s;A}9?Ҁq_q0 oUL-h:smvꊮ)$'V6DJJE!s;"~8)\uOkyɮEOHxswHxWۑ=:7gCu4…%p1pq_p0J솇aˬ[WbdUi}[Wܒ$^( 844Z$ʦᙏZ5 ߋiE荕o@ǻ.I{T]^yv}]]NzgPxwCj?~}uA\vyx>b}&WMCh_Zϑ֕oՕ2]9Zg(<| ELme0\B'.X;=-S29Spy?ތ${A{HF;G27Z a&\Zma:% ݽgӟlzyJ۹H]VcWE,wj%Dŷ)Y%+z7R~` S,O"@8$^Tea`e8V:\v5+ (+IvT;4 rkUe;`qʚg=gmqix3>'] qWM״}w}azf[gyw=Sڹɬ?sP^G#uggequTkdvYTC;ZUqϳ wjGgɖkU=3̔X`L0hm#ZH-xV)q;5׾:\wVKVv5Rꐤ%uI ]R: x9c2e᥇N2N tw F6ᖭUwNΦ­kA(v)jЋ ވ3¤tVl `3uɻ4V-r%*tQ!Iris{^lݩdWU Vr^sElE`5q;z% 7d514-rK_'9ᄀ3Ăҙ),Fh'!T\ k%U8bn%&l,, >I FU(jP<̹}/YrJq.X fvmtq:VE,Oo jz>tYyKtz=fջ->Qjv10f#~a60x 0m>f/kl(+뤔Cn:r 5'5*Y#-u{#(mi{[܎_ľx|~`jɖ' ^='EYhrީD @A%+b!U7ĢDBR˃7{djJ:'c0vUۢ2 c#оjQ')Yld/QhrvܬJ7x7g]^p{Z7C{^\`DVt5g$HiLiH'h+狒+l$lmgc Mu%yC-h HNrTOx|* m*#"ܗ=m9~,M_&bĖ< MDshRA%N%'0X2[94i) <41e[ [baPb0ɺ$mϸ7r5_{/o9+@^L<Rk"w6WꚋXT}ãnwOϯs2g1-q6 yd#W&v3z/cycn~_|:=li7s9=/mkmfj~e~T967<~+Ά\i ZnmBas>W;?=Xx烢Ƥl>٨hm2`s>8K1IIϞ:<_;3p8A[0jk4Ě@J3B[%DW,-Jq~;P:$yb%l,hu\ۍڿI{N8r^JbL, ]L+g m  2h'ow<`(Q'  t֘4F2I@"d}B?~~u&)C4q+~gtONq?\.Uqzfţ/&Bx*$dfeD6lMO?gz9?̓T=ym@ ާ# V2 IY90adRNa =AUɭc8~aK}#ƔD)+UlD@!FD4E@f"7_;֛ТCf} #@Qߖ8O*g"{K}(*Aw˸W j5\ E0 Y&*bSRkQWp u{۽K & 8۳p pm/.Şk X2'#vQFTΨNnv>I׏F.dwbQjP)պx<)sGFP BNӫt5C 9nŇctvY6@/X jɍMM/Soge9eob:qUo&m=ảU?/+ /ǏtluIӏcf֩oN/~!ӻ=r8㮉V:F?W(w y(=7Nv1e~7C R<$^G<]|"#/ޕ"x>{b۔{oz7wb/*AˁOL2 e;JV#z2O}YlL6YgBj/~ o%pa<ԅ !zfWbt6d yIl5*9HF^\ /tI}x恩c&ГSV]qh_ ZbNeWY8(^^@,'͌W~y^{Wu3NE^KlTw4T\)/2lyj=+l~n06".RbЁ,h |N(P*y![g TD](h" 3_A9bI(sBبCFQt1H*>`7(Cћ ` $L!e gY=*=C'2˯ 4}~~j|ۦ5rQ8g فO6}V0o f/W=؍ v컒fߕZwˡ7[yeZ T 8͂kc xaLMN Odd(ö@DߓZZ Z!=Q%xF|P 4P\!JJ.!-HLpvF[((,PQHTu"M9D J;=[J$ SvJ"랍ٳ̵6= J6J>Rî2ݾĎSn;3+>tܯA(+F%)}KГOƗ!d 709ż ]j[,,I*E:8:6p,Y*S%i{A2T5iB w_ZXӯw稳X0Y9TB>M"b90I:ďh`a W|4#H^W09!A\pK`HR\$)!&>¸.G0FA8F$Y^r0X%Bd +9]""zrYҘkv5h/oxi?0JW㳼qjR]pBS룡B4 eN?AUfUlu5.-S GmFlEᇆ&m2-F<Շ#EB8(<G9+#: ooE͜U0$)E&P8O «krg~!c|LNW.ɼMc3QMSP?@GU$˷o/֗Xˏsh >5\Ϳmk`%vg)!t~ ZƞG6lT?/>j~&8K6kbI9.ޭ w aVtUi\3E uu%mt޵ZF #:d,&VG\T<}^zx޽9KR^uɮU;ʀQ/HUgV2R6>^bj1hxR#ÎI--&OG88evӿoo=Hy9! 
G]$(h~~YZ4ZZyҦ]/_>>ܱ]qzZvgk+@Jt kUm{I[ק⺚mQ^h]FoZynUg 0 [:0ܨF&N o28 |Ha3!+ ѫ#T됪7A;+״!9V y l(mC]J"fDB9'Adzp˽Aټ#G<0%1;dxR: omnUCD D;[㭟6/-ChEh dl,_\o[C:=([?zg8H9҂ ,%u Ij\:?K3a<{ V)V'u$փϟ;Ud&?aflkQzJ ^YJɐ{'C-35sJ#KdwݴL!WV:홭(v;Սoe dʁy|-6ƜKrѐ֥ i$38/kh=j#@"U{XKr5TF L2&zHG-JZLka:[n+mnOGyy>ptvp #UzGG,Jk)I&Xlm(7VC>AҐ*$ ֠od c"Eݒ)HE),$gg+rP^zlږd=J>!N.tlV}_E C Eu2.m1Yj>ǩ=*;<P3UJHhF=ZGHfuFIo:Hno,AΎ+x퐬 lDɂw3 Tc2#) u>J J:!Aּ Ex9okGYo)gi'QkrΞyuM1V92((6$>6HZAzT{*v54 ̠xЊ|T+}b# Zʜ,P7N[AӇ*R&VjcA`لd@ Pf!CάUP[,B{4 kAq8*U7x%`sLȘ]ȩt&i,$] D@z%E|0;tRV>H&`4g9#K9vXޙH ,| 'cC I3/*3T"X2 OԵޭ43HiK,챔 "8aK#Vg'D>ȭ[(RmWZ5F&3eUNp!9Q_Z s'"z|y:;#yfہVvouہ/HΌ qZ㴈ԥ8%i9Hɡq{j瞷eS 옸wUĕ㮊ڝ*RZֹ7 xҞ"q㮊^**{H;tݕZz`7D6K,TZawf;+կ&HO~ZW8 imvY0W{o1Xmw }MH3IA;_. Sct^ݼ pMa?ޝ\t'CRVca\2N&}q1t_{ӏYbba d߾'Q2RL>a9Pʾ{J3)=hӮ+jTOtP*P]ǏioKxR U5IH36L}\I9yʕBU,D=-+û|Zd j![MR V:'N ǻ(8 &g^ +*&W5g䊔cro ,wUĵD*])cWԁ^/˩y(G lKlj+˺I˸Iy+w:wu\*yI bwUgﮊRw-+c^`t]^Ӽzc?J6J@Qn͏\~[pgOo&;@JJED%16ELql|=A-s]]=at;F_ KlJlߞ.ߑi<7e5ӕ+62-hfDS( !+Hm$-.kP)BֺP+%wtn#Vq8+i*{<*Fdt S!f])oE†* K͓]/Th+4HyvUh-UhxZ9r7^ޑZ,Z.Q!r5dgtM, VS\H=J)u(ZaF)!s$`~ KA[Ѷ&ir䡫-J=-֦UjϢfo9nsH._n_zd# &|V(^Тj )&V3 x=Nfm7v* %䡤C%Y,#q|тTd.DN&r0 sg-9N$OFܞTx9ؿj0nYy-k >\%8d^t%“E[vĈzٕO'9k3J ƨlʥR ^WN'adHRH6pM^qmy؈WkQ # }h7=PJeu |N3!v1h}TtD0qחᮥlxRO'dYuuim>?.huS[v2م:|NEZ}q'cu m[&qLAb--r @4EQ54պЈ-P^S\]]0Vw_7i#/ >>^@_vWo"T7xmrwW?7 ?G߆ 7WHwݾԕ:ɞ6n~ ;jZ.σ]0t+pv,ŶCZ/fD.&@.&@Hkع)D`L)w2b|ʂ8xTj07-0hWH#5D}ic ag+H<yLz\OI&(\IBӮ'[m_f7{)iriA`X ,QFPB JRiN2 c"| @?nׂO87$w J١p9yn%ԓQedmY',75Z]X겁[5pT|!fiܨi2W?O?rr]NUW0]Hթ3 K73fbLPz L˸5qdsXp6MX*C;} 掣5 l]?`9YTal[Yb}`*|УZ#]\;XȽMG1k/@heԏnĠ_ k>+߽xzdiҝY-a={#o~xE#oܹ_ koyuz·<1-X}`wYdά^r[m5ݐ*5j׿˕z>5U`n7!n,|*[/171H/zڼS\3hgBfqeL@J/^!ɋ0[͔(9ʠ3Si׋9mQ(AdiEyquO˝(1>y&l_d pÔd7\!r,N>8CG8&5!ڭv C yUV^y_思unh;OolW.Ҫ8=Q$%*%2Y.C stm9,AiF$4;;ӎMf7s=E kK]_1KhZF*lh>}6ܖn[*v@]MNIʃT>S>"9l4ե޿}[P/l]/OFz%$JMztSL{?^zo5iCKxOk>s=l :x}x>W_tcy(iٚjp?IJdU{3ˋ![bkչ8G8q`44k͕-˚}]T[Su$}}!8{J!Ã>vx{F`m6T[4)`޿)6>VAU?nM)U=}J43yM4?p̆MVl)OUBAw8]x<{$.ܗa/lzbΠׂmﴊh7G3]*Eō'iŲLhx@;g,ӧs5Yd:,x]Amt; 6Fϼ!DZpx8<Bh1Ox4hgCzzf7"oZ:~j 8Gf&1Y$dQ!8meÒDrL9+"sIQj9V^@4;; CƌW1ѝ`c.RvB\^@Ρx\# w LAH=1 s$7%)(E#s5q l+l}/|716/D MuwmA?;vkW jm3og b:2xUrDrV ռSA+4dP+`*o,C.%-A)!g 鎒蘿NtG/Zq:@gkl'TmE 6 'pH`02e2,yɋ-A'n63vB[}J\$Gᫀ*xGpOJLL/k!Z9eٚ8tXy3\dR$cEK =*./VrW)Y.=8 kBi%Re}gH0)dK 3隞_UadbqDcvIqLd.ᚖ %ϕbHuijCE)LC䔊ADʸsO@TZSLh7 ]}GIXu >Mśp~2J}#5K+#/f}fͿ] 3% zH"O2xא߷8N]Y3Oh`[2Sqblh)~60ܟ<Ǒk?fVV.GtP&2cq5Y$7]҄ :>:,g]rOߌP={ ,>_5_'C4@R/m?*rqUuMLsǤBpzx]sW7ŽFvn3/ώ^^gcvbI̩dG'GJw5$?A>>zy90r$'n$PG:]7X7 0s}%FL8t2fϮi2].&zv52_֎*ݣ.rݨk#dݨ-M|F+7T wM57;xᘖ_~^~˫?=ߗo_ 3ב ak~ގ / j M-3 ϸ#c1p%:$.uk@Rk~:8}2%rz͖+`ԤG>ϰNGߢo4!0f@Zb WkosM|9[rH܈u}̙ikKRpoeA>1n3\ipTiym=a!v<@vɯPl"uHF$R2C٠Ttڱ9^5T٩jkb5ن9N|rRDzV ;OㄱНy2}@FfgiW;1u/|`|)<+u3^LSQ V|p_OyT2!TJ<KvTkL3$QjgA2nX>hyX сr_!IA$s8t~P|ӹnm9va.렙>SMEAx1E-6x yǦO?Lbuյs\5$tI_KAW4\C<6P!<Ȗi՜ CSCoU4(H;odsB G֦Tj"0% ǺkfBβtJ$!8 cҞ=LNv+No>*,RVV#u$.>}TxzB-*1@˸Vy/,YՒe 2H-,!(4 5 LA+8quwnl~ Ko4|MK7- ޭuD}z|;cFsx0fVm*ut+8xu[ڀַAJ18@6"fQEVN,s-EĬgY@U1ུ $C=ϲ[ Υbp!fSN ls[+K7kdZ # -ђ}lp4Mr]Ek『f;+#T5U)/BZ NoBir"\$ ;b]vLP rW8]rT"!,GxkIW'8r&*Ek{d1&(K;sB87AA"ƣ{TO"Cw)rNW C%r*]n}䨟NUy@Ul #YmԚT{f˱ &V«YR Ġ5SdFJOj29Hw2'۴޳Rh8-dDU0A8QL:F% -3[]ogȹ'^.Z{HRF ҫQe!H($p `4ɺ(vh Zځo.!z;I:gbr ψMdR)۬DHAjUZl4It+cW3C4gÈ\P\8'$ZxJ$U"بw$iQEpTfW5dv}tøɸrWbFarr\X)M4"H1dHEVrK0ΤUZ"2uoYZ=a!"~0sG(mA:5%,2{c%kt㪵B6ݸSYQя[|#.Yv/ ; QP CXO͏8>ilٴFSƔ[0[ЖAqh'Al$D퇒cjswvܶs.XfmKbN"Hp,>臷H`[ zDo\} ;@RrМ;"(F="{0&l_7_VQZe48(# Eg>PjYN) nU`Ɖ k"leoo <"b#A` vUZ#Jh XJ9}g NJuayQȳefaI[ov5y֕%^=B'a#wߩX“bi)')cR2`ĐICK<&p%2[\iFWWEoʓ)KN9*+bl &͵+TmX{V*da,Ted%E 2^nӟ ~9yūՓ_hr1:};ZF< R AG-́'>CϳZY$"Y& YbJlieb$J c!ER!hHFd2v̺,!8#fF(ĮFΧ%v5sgkM, 
{gdJCBLwspU<l%F'OIIIG֒[U=+m| 4ZAlIjoԅE0FQi'idɉL )D Qs#H#E7s>H:\s%_gU\2. xghu#WIDd4ϓ!Iu4wI[K!*ha}Rq6Ň;_kul(fDزRD3\5潚r&|܁9;4:otqE C`~YS9İMZn8 _:OҬZn_$dR?6P, \9 | oI}ʺW߿ߏ'5[g?W%d\ggOʯߞt1k[.k`Dkjw*,'  RMFSrhZrڣ^(JXr 6b%!A;`S]X_.H_(g1JWFk4a*JKx:g d1t_$pP_ع1wv2؁ i dHoVel,l jb[]Ϊ<}2+++̳ɳjwkx1ZNnR28EZsdy3{ljPP֍rO |($BORj?`_P=  \%q%:Ju) ~@Hp U \%iwJR G+Υ)hÃm)uZj)^4#w8 Zś^<+񤩇o?̾K Pq ̾VqW U^L먇d} 2T_x0A?lg9lp.bI0WUX%p<+F[2U:@Qwj'[JmrP~0!) I]ǐfMLJIÁ$RW @b*IsW|GϏK>:\I`{ꑂwR\;p6F{a 5MJ((XI՗.ǃq*7f*Jtɛ~eSL)Id߷:_dO/}7DeFY'tT"ewk\q}}~ݼ@fb`?7.n h7jc8sFϐQ~<@Z* G9ѣJ[ZR@6joUoGuRq 7NJȬf@vf!L,X?Kej^<55?==.^Magǯ4EalvڲЎVG&fYiy0RjI`- CjI]GOՒgR5,jT" N+rVm3Ƌ 3%Gi3Ib2|NGdU=Z]彈J*MCɭvpAiMJjZ?~ovX >Cϥd{>XtL629Nh}U&iAislUihϏGDB2^' A )fR# We$'f6`9[n}Ok2\\Oym'fn''{knV-v=|h~Z mKC3:KW>=ʯAF+n>0n+Y &6v9]$W=M@jkZjkZjkZ-nw^ Fi.N ℹ8a.Nℹ8aT[P*̔V8aT8a.Nℹ8a.N~T\]%lpn ! xI%8 @0+Hʜ0>SP2CK[Hmdَ:}^buT  XJ1EE4b`  KdݓxLG eZ30{-#MFSx>8䯄ԯ4.^A3:;`.V]= qYebKCx)qvG* 26;U9F6aUC IM ]QQ8`$PNbXXH 5qj8M!0,-xï@/>zfѓ9-rE_/qWwlxZX % ZojK ^Ȍ ^`A Ɯ 62&zdliƶX[BpXx-Q21"M.Vaw}qo|' J >wN!IF vHkK Ȇh Ā#'+)2ħ zk4$gx%JlR!`V0/2o.D,enMm}p1[ӎmQ[2vf4*W#^8v).GXQ{(`T9(EV0r!p{20C ~dp4H$=G(XGL i[g=VN* ]0 ""jUFČ4>hf]rkŔ{.Aɴ "HpNuFHHqb05)PXobA% 9 H\Jpk[g="~> -⨬kbiɶ[Eq1*ǔ 0Ubr#Mp|-%k/FWŃ7W/`̕SS yT9ؖ,xQZ\)a™evm}jv,mwhYǔ&Zt`h H$Gª1B[ #FGKv^'n4 -ke;yV9dX:hK',ZhGT 킲&h8FU#D8 y Қ)1I CDpCF,sRI4 GZ6Yo׬w(kVGvƜ)P,F5[eW; t|tЦ {_޳$6Tqj{PG(vkiIQI)hQjc &zGi+LʕrEM+,Q\tzkl3z$%[`ʋ>-P*flt.rTH[z"\4bT?fll%KkD(0Z&sH5\R rHtnJqTQ(u: MDcZ)Z+B"!&" t9{<ܺUoKVʴ0СO$ z4oL1bj 4 ^1@ f<>uK@?6-HGZ2B%Ui3L?~b<& . 0Z!4,`&p+@DI$`'YT4< >3\ W] +lt4Igixp9(\˘[=barPRƑSIm M!㝥yPn kNcF>`,6ʌ :FGR6`Tѡk&z ^ȳ;ă];X㪍X}ά~E,dzqv\\$X&P2OTU9NDٕȲ3+ϑ-{rq0:QA bxKJh]Bju9i{hMmV{2]=UвӊKwjzo;nlzCǮ"4G&EsfBXg Q% C*6 #- G7yM{kKiB[2pVo>^Vl^]ը\ H?*B6Ƃʃ['‡QKbQp$.a'tV}G9AY#XK43,s&p0rJj(/?nv@wN#wgDŽU-ӗK• C*J1f!8)%$*q2-4A A88+N2x 3g)dސ'┼2H2 h D`A*M؁mQI+e(yڪY KzzxHGFz1DPyr#M*j<8?&`218'6F9a4)5ֳ/1yjf7mD u'RԶg׵l.dK| O\7\ξsq K$h#fR cKpK\N1lQk&2Ʒ9ȋK/p*׮:p܁|KdbQC)W$Ȉ6t\.> o|'WO> ,ҙaos*+V50^HI_+^)GRWĂ'hyR;I^:a<tq0<fbh.ƫt ^79.Fr>xz$T}N 1eJX8J sbKʼnzPYbekch:g{ ' .҈ z]> n_*iW>'3=٤{ͷ^nн 3%hLࢡlB9 R&_*' ?ZTEkJ$lV)岾XN*cnz˝5Bllndߍ(Qkϟø;2::pXٻoG/ԯ~quUotQ?q߫SI WuQOAIK&i*t0OY@7q)^^\r󔊹/\z痝_A:U-c 3Xd s]JX&@§W;+>kTI-n{O3P%!04tvĶgxd;UeS~sӿF?]o9W|Y6U|epd,.I<߯zؖlYjrܙcuzXU`8Ku9\}Q'6h~8Mם[BI%yM.i0Lg pkwBP{ZΪ<17(ߌwvmڰI[Z'rÒ-;Fԗb.}~ "쑃B$!'c#dA -` eIA'=jۻY3YrC;ʐ (/[/Q[ϡ1$dV!XMf͒DrGEtJQ^'Jp vw~En;+[qvF)YQ}|]zv ^W=6uh7x \iSZٓB?N'[+HtBSRNd*ϐqd~y? 
w0u{ѹ3f,B&xfI#E,,1핋Jg7tN!^ȭE x!̃ Qy1 Dץn5E-s3r6wP|u~z6jKX>}OK"i w[с_{uZ*Yl51=^}" `ĭ]o=*gKI䐵F._xU|KF _5}U>NjL$-'l̓1IK$Fӏ|針\>,ZQ)s2]CQl""gBϿW^6с(VC" [%Wk`!Y[Nv mzgu`lpyrq<Y}s~1M{H Av1Ӡeg]{-Rwڃqr|1YMƿ^ ZWk.N.(pՏU،;*kÐ;>Cz}*?VtDsԚvPK˥c*IM΋L@N`"I+JWYHNHLS>%.R-yRB1 r hѱ:v;#gismG1ۯ,B+2)v\ר|vz難z9ugk^˕; 2:[nus&5 B̹VH6J~ ::,(w}y)xTxr1%x͊ZāqN(4X)@g $"y% 0J߽Wk*yg5?WO-u)k稢5(v2 YLA*ɡ"*@#/n߱iD,s&c0B2 8g!fG$G$H&8MlPp]儽 u+H[UEN"E!xtRH cP&$dt g3$ a~wH\j ?|.'5R=q:1$J܆/" 3?Ka:/jC!g_55ML?/V|&&G (Us2W^=h./SҜPxwq=~$jĜSm8'ͳnl$jwtIUI'ڑt0b0[;2|Ls4ux+q<:=b|kV*&n5s% uvH}kX~~|w~pǏG\أwduDO] M͆Vm24t9o2rkƽ)>F-4Rqn[k41nN$_lEڲ6ru*~qfYNFBA OcB,a@gֶڸNU4q/??lzKr ܺ<̴5[YдGyʆ m0n *~f*} :žpr7 h]  HD2,M cszЯtWu<Ėu<]-K2wY$CYdxm%p3 &244ugKyhRyR%%U)UW3a>vU6{nuJyfׄcXf# ǘ1DI,S * ѧؗ=wRL:q2ޯė;d0+Y+E}$>vT|Q De\f K0U K-,!@2JXk2qp,b o޾=7lԚJKɋV3+QlS;7V=3 `V d;j銬g >e4\ոjc{5ѼTP8C2h4F)nIe :u $RG|,+ uVQɐE k`9wZh!ҭo{}3A@g@ŌbP/:s(HI\9om*$뫠( eYѣ>]E;n)e"A8kI`(2 ̿) h) JeNRɾ s[{y"}LgbjSfѳe+rBfd1kТdK2ygA@0LX-Q^r;ܪߟr0pUE-A4FaDfd&Sq'c$S9,{i1N8Q6Ld;&5`ICd`Y:#gC9ۿr[-J+Zy Z"zout&Zk'RtF $hi{T~';7OifeeJVaL Vz 9g$&2B6)!bCΠ~R']*Q7I\ 3," sB2tLE#iT3S!pGW('*`ob)x>C>!(I8GDž9O٠ >!DGD:m@ç=ݛXdK[ᛷ@ܛHmH_Si1OބM钪j/:/@dd'J6Qł>)fU{7kwdPR} U\W1 6%EQJSRVXi'2TQ*#G66l} Lj'".zUt#M)AtaiAA嘒 ) U)∂wS;#n:kbw6>n^xXGtT# Cz톜T2ֆҍ9 {Ƽ):^x ݄ {$)FIQ[IZS%OP|&8q𥙇 &e$%U{wzCP L#ln3`)ElC.dOd}ȍ:VTn-WaKF:2eU@8E AC~0Üc;d푌MRhG~O k/zZ@[^A}88g;,< l^|߷w/T*owFg٨+"W*\FfQԫf 3RWD٨+"׉g Қs֣ hX Y`D$OpZV;Koηk^ u!?33ysiI~j//ybR/=[|yik{FZh{6Z&=-]UZP /QK۲7_Om>Q o,0!5Ѓj5QZZ>YgͺZsjYe!(Q}jqHC<Oχ8 m1md+PU瓳|yxc^.^tTkKilZ ]z02<;Om䌗qߩxtȏVWyʕBUYD0S[-_8/ECD$3*VxHN(^E nN.KL){}Xl*VeVI)Cӕ1We"# lj6 tXmz>q'"?|NJׇ_G ݩB}>gZ侫BxsWWzW+> >kːxU~Ժ'RW=[6$BŴ]/]D^~q8,l2 j =1]ݙYS/pWsWz l<weTUVcwWYJ);w=+#Z}.Jє]$%~kG^-RZ˻y:e`*/T2X+-T"f]q: "}_=7F,=4\oR]s+𢷔DX0.hX*WF==%(G0QQ2P0I+=Ԋo/(~{Z-Y^JszѻO{onrhtWԣdkrGӿ;mF/tO(̟{qgxuWwrGWzpnUB0WqFtt 5Ji1=ٳJ)/\t.H,^m$[hA6wiM[y3ef?U{dk?N#;zCS|b:ha x` "V^h#1HZtEB@GeLH"C(C7P &%B*T I١"^1Rӌ}}, e  o$6ȸ`޿JkLNuTn0]3`ڤd{Z<1yg$FZN pL ^Q O/C&{.0(q&WJl 161a*ALQVa]LۏabMRڱօv`7t\#WcIp6kI!1t2hh1Xnm$)I?LRi^ CFdIGQ b#pDo8 ɤڕ=RL&߸bvǾGܤ 1(:zOZ C>h[DzsAPBr+\X**y4Ȥi* 5#j~qR5uL{Ŵd_XY _}\0 d&˩75>pT*j2!+f\PCţXJ;~`˖&_'xQ;q>G\gau຺כ^{ֽG#BCE?E0auq*dhRYl%WBQ^c9W⻰Ä= 8W<}ܣ>%̄*xs%faW4pu)6^rr+.G}sE[r*Dŷ Y(̋<Ѽ 2xHy&2bV:.!gB =^IN\\zg& v( {V0`bKج!6;'mC{ؙ.aLA[A>)N w9n @ u4]omP@@2*X9oJ0z }7uJS?C7ezq؛⸼^:~~/rw(n28][p^ճ4ڄAm蛍ZVݨf]ARR=D0u0HOc]^wy`V:]tUNgDZ)Cb*!͡6ѦSR$*ɭ5V{+/T!~חϗ9E|Z9~,Hl(ozO$za8yUߐܒ~ ]!UIViVY}两Q}6@'z;sd)_(Ik"*H!A$Z4 (;n%6ec<%fc,v*%eKܺHmHJ cP@5cZio^zxi)q?D]x#*mwO1aˣP e>c&<4=ST >:DS⮅ fKѦ) 76hn*SULIPyN}%bP*ŘP5{8{ *T@b{RCl"qF"c>D#L:-jh), @┈}ʚfy Wa-閸~.Vk?s*o1 NXn,0y #6 xQ!D5bѕ(AH!YCs&ѐ8S1OR亏,9Afn7kp5ɥ8TwD,~Ɯc S~[ A[6Oy͓IwNoj2URxK]jFv';YH볊~YUO|L⍻8]`ZԮ ?&`0>ŶCZO Sjω "N=gjϙsf)s~9flibƯӼ N*7@7Ǖ yFt2c82B+}JT)rQJf*;.PQtء8< I=RDLD$K K"ڂL%.3GEL)sdbo1xA༷r&ꄌ6PrgW3UwgGX~xn{FJ 7얩UxuZQ>itm:L^RxptϣNLz3f3YO#+96.OD0_|7u缋#خ+Le↻,v\lW`Wχn'kc֧JMZtRO@fn7"nάdxh*/JFc%d8U)eNcSG#u+tp輨C5(3=RjaPʃJ rOX0$BqKYAp3C<ӮcڢmQXͨ~uO˝6yqI嗆1ehp|3Y"BJ4 ITIG8GȺ|[891de >lZM%^QTu! pQzA hPXC4l5L@ϼj˷<\W4r}nK&Y-ӯgWΤ9ßrFDH'gɫð~i]Q_%]a_޿8zz_~WX@kZ s:u A~N߮nZ"Dˡ_~ogA?#?mˑ<%#y׬o9rv)d Yj|SUkÝfcBLj~0_:*W=PCźF>|]9lv c]0: / ;z oןsԚC9w&ēf8ܼR䏺Hi#Hs.nߟK]Mb?&%w.pr~ufXrCG]Ba-uyȺY$NlOw I08.x.W~?4߰˳|/wR>U<ȣ_B]Nq cVĖW 2 zJ^(%\&|~}W:^nT件]5vG˿H鈶Qv=p! ޔE+\_&j54u)lI ĺt˃dԙ\yv DŽݳ0Ii}1gh>ŽuҦ@.H˳.wԪwk|Z+%ʔQM=O A 2:+s.j@n̦Ȣ.*@wxt}ƅ^`Jxot> HIa4.D ?{Ʊd /wQ(䮃{ 5~ʄ%RW([=|HG$G8IX333]OuWUVr=5Bz|Z!;Z}iƝiqitLILRA]zp7O곪UMZ6dMDM=^W?FِH̡4Nd+ OPm[PmZPm[T̶cؠWgg6 iQ. b)0DWIa.Rȭ V9y>r0pPgTѡ@E%\Nħ K<&xu~1jKswc}X".5篿ۢnٹ,bz' i\ %ƍVƑ)&(*T+ mJJO)8k ߓJ:t[aY_4v}j;}r lbD'7>pWIQ,yvɉM Jt}.%. 
(*ΒܓR% K㚏y"42Ȣ 6mbǥ d*ROחa]\7 ^S~2њr- q&5)酰s&}YU &<:9 R_U8] n+0FXa l,'gK ? YDrdIKU,VUn̯oc}TnpY UYs(}$$/\ńe൧F{QhWo/xhz!Y&oHd2qh$I6HLpKnb=duqXN7f i.RIDT T"d QiM1 0>͜MƬ^J=~Z:K_siA#.5GL=<)aħt570L9]&bO/c\=Li'g9LPLp di@^TRQy 8▜2)C8;:5ZJ.ÑLҹ?9\ 3f%Գr@fɻP{e36xC/dNIIEitUqiv>v%թ C^I7 W-ћ}3ivogg*iT x wu59GSʢ_4V|!$]J4LXOS<=>eq̹}8gw+.MPWMG>垥`leK(Q-Zfĺf)1ĴѸ4EOn?4Vv\g%in#aEPn]yPf_} өpaO׻ҨzW)p!Mjb#??zIGnDܢO.̴5 YYԴ .  WڄTp=9TaNyUm=iaN-n(1l6A-E"'HGdQ(9N#UN#: oOlD`<2i՝G,^ʚ{]vVȪ>tg?;i^+?G.o 'e2q_YR[8Ҝ CK V+m+VQGϾ-O|>1(0bdP N3,٘{R}a#yĉvlts9=veb{=_PW@E2f K2UIK-,ޫd@Xkzp 军1ޙz{ {^ϞZ:| 8h2֟Ybmɿ1\y2f>JY_[b: 888r+*Vm/D~h0mHJRZ-b D%yZ "J!(H{礵>Zdw<˲o%cH$aGd#"[ST)>6JAezO-ݲ.Z;8hOȂevfrOPf E#Kl`_quI46x k'=yjt -1˭f/ς{;{mˡ &$S!)yͭ3R:&#Bt8ʘ`nN!FƩ#'NІ EI3E&u"+K mJ꒒zg;k #}!zotԲiK2>o}:TVZ":gJ:g&-4U@E ii{h~kS;Q?52JDD2")[A*Y|DDFJx D:R1z?x5,̛$(Y({"* D!c$V Cf`izh8ZXx!]%~0]^a\h008A!V fҺT4H3"YГ֤U6E4>.5EGG`RBp I#Q$^ :yElR93ezh<b ㋛T:ލk56!'e` lk7K/ [%hn'6Omjvk7hb1 4seVIԊ!)7, z2t5zZ2#I%T}*dЩXURSmRJ)"wF]yLWr@OWCWFjɃ]Jܭ5լTݒ[;L:`Xc&np9^mf*+nY?VHjF+o{Bía^_ Ϧ[}0qǿ/;5t# әvZ҃3ie70<]>) KQMef ]`exqA?bqj6A13j'|;WޒUV0kRG~{6 ؈ךBkZ J%{p"*;DW@W誠ؒ6=]214읮6WCWltZCW.2ЕjOg *p ]NW DW5MW*.NW=] ]I΁w3tUUAkL骠:D. & \iBWUAC+ōiAfuw5+tUbABitut%C;DWZr!uEp ]ZtE(잮THLt \L _$жݞ%j_z ?(bmzmO^2O[o>yjW*<@:_]L,|?š;'Wp>R=={Y!Uj@,o\?&]\.y'u{IT Y(©~V!OVO՝B(hm ʥlޓ;OKB&0dݙxB()mRz/#?=b `Π 3]mf(ev=]m99Jw .UjUۈU r)*pMgUAkM骠Ğba*pUgUAkZ K.kOWCW4]"Eg}֐誠WP.X=]])YkJWU; ZNWRmeg+]*AFp7h hUA)MOWHWF\ jY p~,*@͠5h%.t p+ec{~,o~RI'h]oWе!˥kIQa3R#*ZɉS~H-YT HKpȝf8KK'{}-RsMPc=V!菋cASk=4Uq}3XI3sÎo3aL 4d+_="!N3 4˙(7 b*^ߝ],p&SxP sTzUJvu|NN7<+>JMpSᲅ_/0m%|Zڐ {}vE k柪?#>^NaQv?y?1yY' AkJ^Դª-ʞ}䦣r`lHTmuGCH PS`$\$D݂,ȿ, Bq&9;[1 [ b!cFY; ZDl ha[eD g/hJo$4Wc "]^1`^ .A줹rjX}(`_^hoz2۪f,^tṙ tggڅua?:{ժ5TK6 WG~=aYntn0Q_v[vs^o#ݑ(Zbr~w1|ZïUZdGX32< M)( gv|qT۲%P0 $+:mqkrTa=5 ;mNbV/!cqKo׺ 9~. I;J0uZz% QRNiɓٸ:r;Ղ&;ƹb$!L!azR0y]S[;kT6%?Oa{*8ΌB3gèћa3T/v%@V2OxthCChmK'bm-mjlj!66*,/@}(FQL3yY/mF͝cyWymjMn+FRZH~t$5A|~,uVż >&͎QN Do Qgn48/NxO?ON'?8D>/~z?`fp¯c@݃kM/7H_MC{]T]}]U}vyCGEڑ[̭Qz2ߏ*W勧dUز5AL3l-2c]K+\htiܙM q7 Te3ujuW}}hV~w?)Z%q+mq1"$מlMh8q%Bb*>Й֞t \3w[cFFҥDF-( D" ,CFcx˺Nf=ŷ]'5&b hHz畉6sK}Vw>kg1(vt:eUԝI!VWڇrj~>"g4`+@[9p[JL.0"!|Mieݥ{%ܻg'YKlq(}DGǃ0M4Bx@# ޲(7Z 7Kyd0:SDcWDpCLDK9 Q ƑaRolI`~%л䩏S0nYyQ ܼzg{]y_;xĂBXp! 
`*EQ(DA2DL:F@=FVVmנ3w@*[[cӋ(ya3=7߀!7]x4%r3q >aV93XV8d>x5jCy5`^M2ǽ-AJy@ `3pu4Q(-w\ (Pʍ"VFudXDxaREbD hv`t BRwCHtc $s+u/4`AS1ڇAX<aH;93n@ ,Е)| 39``fB9Uif>"|7{ߣE^l.#:KfRQDLD;O:yD pd0b)4|0",DDꥦ0+RD{FcO凋$3d::o9T-;l]w|naSg%0P ~kcy"x0L0l5OSG))6pxR<jӘGRc,6ʌ ;6`#*$;A (Ut]⬯@e9;e_~77m)jY>g?{sIM!g\IrG X`89~&ϑ4@n-YxwdVӄQ8mLZ3I@f҉[Uń6/@jOa8j E-{AͣY9[iݝM{v;P^y<$o3βϻ\;R#VA?s[sd9>z^RǞOƦd0W&~'wmmTZsV+8TǕ󒪔kbAJO@J Ap .ĵ%Q.o?1MX\ Yn3:,/8+0xk박5wƘvԦm7xmh}υ%?ϊy:.Ɏ5j.^ Qv*Yy*v}y󶢿<2MEM;꘮ݹltv3jN$hMIJ.CiGMr$WJCc^ju  lM:[fzMS}9LjNj7sNfZ9ڑN G<47x&ݫs<,{WZ;t1qڦxuA%ƌT(0E(y/[:ztQ_mQ ysffd=GjjC w&T֙RׄrRhmm23 zUEcgELÆ "+S2QRŪps~ugG;o1[~[2c3۶(r3G%a&N6x6%MjI t5Szz~lglT)k[*VB5 +1]T}V7.Og?iwڏBZܤhoyR{Vea6VYgbkVPrQ ]cRĞ6=)~8M&͇c!B .B XFqH`)2ݢ -Db*ME4>mzy$kBXuR5 eMbb-K*+Gժ<0IdҐ}s։}w6N S,#H@"[PF"N s$)swO6Q6.|RR@ 08D.9%[zaSRRRnԀ )PPL@5ѣ N3)&9 mhkRshmmZ4'B-r*V5B0%"NG6X^S ei6cZAIvz2a۴FeGqnD1'r f' d4 @ԡmEY5c@ZW|.1 ŏ>]/qv9f,r Z-*Bsd-aB6YWmDD1wXM 5ncr"\/KN]СxٚjHZc&VBU^Rd%;# bCuYc4iI]1\|R8аGV48vZ ԁ [Zx3BsJ_<߬\GL jJRj+@Tt4̾ZoH"r4((:`NyebE0WmNiFC9*Mf)Vd^Z4窍eg/o&nңԮ;7Ǚ=; ƎLnA};22] px{ڃ$qF'(j<0:ICH=Gr={K,=->1LE={lC$G7Wʧnc9iiMA'^K9mz.cw''-7y9y(ȶ_~9;p%!~7~͢Eo2?3JK.LpڹU0J6BW˅Kmlԭ[LR9R t\5ݍKw/_ "@Mxx/{:@RX3 (dG**ٖPr &T O,h]R4^аb/,ZÄ0&OmG7=9a1gc-9˩wS9HC9r8b[ƒ}~dM wl#G>#X fFcDų)Yۢi畜i#ܬ4} yšsQt(WC'\5-j%{mAR8;w*ݰf싅3 >jƌt]Z]v"R|' Po)IF'g0FCPK:#W ][L&eQ<1!{]ے 9{!%,X`zL3Vq5Ts7bbNga'3j3Q(1a1 /!SXI ]L<e[Jb#{xXǐ|<,̰W*E%o(%/JI*z7&x8:a c_DΈH3"Έxf\֚Ĩ+Ȉ?[uvo&RRls?bJy԰SAkROڀR팈Ĺ?*QpbXהbݴd_\=3.ڲނa]6ٶ~b30{sP*f\<.NfA/s~xx 2ey3;5c wZ՞lM^Q9;$ڏMr(oφ7~wH{&Tv-ِsaUwx+^wÔF1-;Tߗ+i4`ރyPP͘KBYڰ7L d|RBU|mInjj˼9:ͪ7=G7q}zh"El%m[BD69q.ϑSC$Gנ'Fhg\>F6ѬrMT+= W5Xkś }jNgoj'r]u=E:~Xd"[nv¥_OOGmsVO;2-N)DMݙV>f8}۽=@]!R`t:SY |QnBP$cV^2JHqb:3e "]Z ҷD9mu!Y%\lڹpɨf"cDmУ7^KCc5F&-.֝khůxlG jdbF1pS板?6K?7 oOY8L{ٯ=bzLqBՔ "NO1%KzCPQ% 1t Xl1.SȤQ f!x u%{1x.L?|?L)ؒ/9,ū |`˔L6A,m-#d,ũP ѺJI+jJ|6ؼ=2}5N vz@ۋ&F|źV3bcn :68e}_h}ú>MI{Wx}Qo}G]j|~?uqOZO~R"N9ߣ9ύ]z9"j>c+:\5) •ArUhઉXJjIgvʒD`hઉk]5iU WO 1U'v壁+:\5) •# xઉGjoҲ:\N WO+(uɽH΢0KyS@p-t&Tb%wIW!Ic \*hztUPBW悱]\w©j_Sp)KF+NQ7J-jWZ]!C/;DWNIjv*(ҕdh]`;CWV]ֲUAwHW6CtEНWbW誠U ^]KR\1%*uwrWUA֫)UOW4/ ;DWX *p?vGUA rCt~O>yo;tUõ]ȶUAHWoӂzp|ݷKc,ۯVly6Q?{8wOz`6U':eKU[*+^W^V5Jdv)+SCI; ڷ/;^7sޣn@a%vrϰyStUj*hm骠Ğ;=?csNNW>3~hDYP EW|=]:\hv `m;CW]tE(7Wt~JHmxU3tUj*h J!]I]3ꪠ= ZFizztR0XcwrW2*hA JezztT3tUbgAqv*(ҕmsa:CW̓MP zzteo0J*p3ֶ^]wDWZH!>). VqN/4K|cֽOj΀/yc=؆EYxޏlG^ ۽I5X=y_FX(4?dr=OOՠM5Xš'7YȦ(MϷ>J7o~Kk+w=ξ@Ŀ٠RHlXgBWh3ض Q .6..k3~F#uLrsKp<;)V k-!۸vpjWQˮ>r3ʏ(P^J^ОWHb&X4>CJrݧE-sq`ʤW̪>>^zhvk Ot?[~4Is+g~t~X/+-(>uԲ HyD@T߮`7B0qaK_n(m߮ a58?1 z=,$PSFY䠤vkQAJݕ=UL,U1}<#٥n&hk O!jWMR&zdd sxpHNF(㲸?^mFӕ] f >VVSfxLjC&O '7/[wA ahN@z_Q5u|͡^e>sgvV4г8_c&rqѬ-iQ7!e-w0y9EM|@% Yd&3Swc-ƪ0kw4f!z4727: %U}Id/UD̮/WeףO>|.0 19؍I7ie݃QŹMr2mP?[җjx>ߝ}_PZY5q1 jr>9G3FJIf g|4lcJ b}z,ѼKG7R랎烺j,20[ /:# sCC&u^dT o9PI? 
*+ϹY'Q+ޗ4V%)E%)/Iy^xi ˀAQ͑\RGj xsLjf[0H#H=I91%k"$g"DoX6uF73Hn`۔e_Ʉ̄T#h+vC-s>0@V|*)~2} e w)q!9Q-}wY/J:&А )c+[$H<,)Ց3,\*H%m|2cG&e$%+Bb|B@m dXL.pn f:|fF)"7ΣzՁڅ` * ٨M>s=pHWIɚx_vv& 9GFǦ)L>iNkoXÉmj^Hh~jPVy@`P)<7 Ʃ+tVLU4kP`)֒\l,Ϳ+Nc?;_CŌ֎)s>;YuP/O;g2h`ې06\viZdŸtZZD(c1hV q@W01xSoZReh;l$k/CSm5FЦ UN-Ӥ$=F|:g hJ"Z#su֑:k}k{vk3o+1}ge-0(mTy;;ԓ6^ՔԻůŅp*KUMv1%(JT$8T쌄]۬k(uzg AXj8C̥dKf)hCK@q_"VKI,,rL)zG,F4.{ҡ[X"e,qX 7FS۬ kv,/ ƖDs:Kk1O1mc ٺ`C&V ^Wlh.)8;uZ?'YeL۳1;9plx~NhmMM﷮L%/Tle7S2VaS[1ga"cģ`"-e Wѕ[GR&ǘg%㭭YrLXvL Τs.I5͌aecZ1.l2B0 ߔ ׈J7EooqNbv7 ˚oʍFp6;XF> R^T=@3 }JheJDLH5 tJ`66Ah2vb P2#0c7uq1[SSq(kY+w IP&P&SkKd-9:xtLJ"6d%K6ʇYiェI KRĂIדp$Ǭ"`&ݮ18aԗuaLˆǡhfD3bψ/Y|BmBE$Tɀ+(<*v']\$`~uNj(-gL8)\͔5@3>`!餹Ɇ@#J[Ì2P{Yt6:Cy%{@KH֝"6Ge,h}cRpȸZXdؚ›w8߀o;򚓻<]sN/ ico,Z ( w(OT(xجVG&ue*ajD+)QXǷ){V]_RԻP]?{WƑ_@F `,Nvml7C"}J)R![ CRP1`InvtUSSpf0|벿!ˆ^Fc$!n"A֠0+Cʘc&K}th1I˼]yeL|(|Mㄋ,edY0GM,+&t[MՒoAw05YжfOF (}Ƨi1Ogx101?7u"T7ǞŴK;ƌO",E.|x89l~Lғax?QnTQIv1EA@(Kd6893lB}љ/)WPp!}oA 'IжCˮz{}.lYxhGo1TȆR䀘^*81VTMP=5;ɰ? L~6TM&QAeve(ݘ &^\f=LgaͪV &uD9hkc6;˜l/_T:4ֳ)} "&vm>[]zgmmក$X7AJ$]v}&P@0sJ%I'EC9,*יc6RJHH l@!phB99Ts5hVJ)iDlDPʍ"6ekבaYqGkhD "j͖|B&$ <7+|^􀪉$kZ45<|EV9dJ 94iGT 킲&h8FQ&Dp3b*l1E4CR(Ŝf$kE6Ff&i]eF03` :e֥Y!uwz АWgͺ_5:A˘ j)Q5n*L pc>uztsfU ,g\\s-r#$rXʻDqeQEqoQ1􀲁ӆP\T!=Ucx؛ 0Ո^{ ,3ѢZ  D, r@HXGeOb&y$U1kQR%x^#t{cl ,gTQL}(jjռ7zO³3&78;y,K# :!\cj FT '51 SqZ{t` {Gm* G1L :ϳ$`)TDՠfEwZI%pD0AP8@2qbC= 0u7Ta1+7ncyx{*i₥ gJn venv;mI!-LJN;5$Qj8L,EYleY9,#=(:h߁xkΝ%9P,V`Q:6O dTruN!YH}/d;S~~:4"Ùt{~jb d~2%KYgEPa@d!uJ04.lwM UKas 8YEE$Un;}F)a` ey:DGgRP;2QQl~+aP& *\:$UXMF7 ]wZXGgi88L p]B+TWHNx|qyx)RK9#8:r[)WJ )tJ;ɓlٴ{2LӴMj2>//^T\O~uΧkK|m $?Lƅsӟ-n/_2VΤ&v&WtQ7 !jY,} +V1~Z|Ew\s KɺYkFlQ2GC@~,Y9r *`O2۟6g{yo?߾.|׷?9}SL>}:TH$8#3_Ԍ455WMMfj] ͼ[.H6HZ0'Oȥ7|7UzGz2?5wҕug l~1 2WQP[Tt.܅/Da@]w-}R$⥒0k1Ҳ6q/wh=Y ň\{JQ4i&@(Zb. ɆkP'=yy=P gnp+tWQ:ץxB#+prhc uy3 /njO<01j8;/з;3[w[7yVUN;dKr,C9^y5~MKDǃKh֛/%i*OYSpun%dp6.]΄_ާ{⩝Ggt#T:32(PS)x*J=S"DL:F`{4+j-/$U=b'jap1mӒi~uPwo*qrv(7=X /aNN3ln%9 ,-y OjSd&FAqsIJd GtJPuCU}0Rd3N 5[zzW~ԀD$g9 A:r8[V jZ -0%fS*i0\0@DI$`#H1|B_x>ǘR.a$m7CVr6s3OlդyUOh2fE@0L0lS2!qERd0[f9ịU 62Nr6`#*$;A 0Ut4WY_q=[jѮSLK~Y Yr ʮJ}5/>Drʕ*wԀvVrd-rܓӾhA&Z-@D99 , ꔕ)K$91pr`jֱ\`B<UX䣅X*# ZMͭ9l wsaw7_$q0ZRjIot3<13)1a~VvI˶\Kh>S>4IZ?tŤտ9.?ΧmtgC.]ŀ3y[vyye-F^~?$W?^պި3m< " ^_0p]nQ⭻^8ۚW]q8|Sb?ܨ$}^~|O\P*kߋ`kߓ\԰V {raG0^Gʘ}0,4j3C+INZ]h V";8c*uE]44oHI%S aa3llFju0+%GLP$)6X8oZ]!YT,v}5pBa12C>KOЈP.Q?!Bca['Ũ1΁)8cTl%N$B6>$ Z9AY NGidΘ̙-)mJi(@FC$|Cz#vW`գzl~WOQzRN>aH@),'DD%1"2N&oC+5!$ÊV4#xoEOi%ϗ&y"NFX)8Z/c dZjp[TRg d2V4U+6@/wad8޳x\1Ɯ9Yn REc. sic8 fj0}˪E n48$ehQRFv]#q{"җ≁NwM #N"!$N A1:j'[B+^;hN6lQk&Z1~1sM=xտ>+{Zo:w>;!? A,XN:~|u~;^ӡ_95~yp&}un;da1\1Dv:t}eJˠe94E!#-(t! dNojfz=H&5߃i嗷v;) O`|8'  hNzqϜw洏LzK(KB. z]wo#P.Lzp2gpoLC/ೇ?4p֭9Mv^'#eӿ%_G3)o$?n5Kƺu4\^bHB]r~B-9z>[;h7yjKQJ݁OT֚2|pS87rR7Ί PU|eC.B0r~}տGz?^»4UX09ⴶ]/`e^?e[V>x >R|#G*Z1hZѪs] ɻ^u8;jhF/2lt"v}.8_F;R$0uQ(lcZpގk%ʎ;& ,jh-YgA}0c4ۤ L0gn qi$`HD4ai<,FM?x־jeAJ(?{ײ׍dĸl<@B L̢=;G6)RCRm_?'"DݶĴu'O[َV?zuar襔bi1l:W(yVzĶ2'@<}>jGj 1#i:g479!NJwrkyoPV!=69#IkW[OuE]x\u>U('&iݝM rUoIM+ܭ?MZߜmC-#OL_74cYNnV $C5:hF>1[(6:!x:'ˬuٹ >( /[7&u϶ͳ-sPakfy8mu>kBfJϕ%jp>d_H;$ݷ黬mv.,^ekth}S+bT~jcs7MpGiMX6-}2AnZ7-s8cQZ,t%h9td ]GDW+kӱЕVPtGDW8|4J|t'HWmJ|zxԕx,t-]?չHWCtDt'Tkѕ9DӕQ U.)<μ͇ɛ>?wwnjm,gHnO˽59wg!? 
rGwv/Ovm\^/^~p*#yq_vtͣ=W#o߁3"¥&Wُ[ŭl'W7|듫xa~|1$U4fc5n2U3fcwlW:>" l9lZáAyCvv{g>do_aMnei]Y\E}텿I1.x ݖbvY6Wwsn%6vM+/J;0{=Whߺ7ҺP}c}~{oOidw7'ě~zokOɩ8j1(@D4ry3w?YFaqQ?6g4_f|3W9[+]u?=zxr+޻sȾ`6VYjkGbOxy/?c[9Gڛu\k5C̦訋ynRnیvƞi )K'o<ِ!Cݝzה]-,ozɢ miޛҍZNNL{g;Km`ߋ2[he D(jmlkoluS/Imd_r1UѢic]<.Kszu>&Qw3 @W(00,acs  ǀnEhĜŌ9{Zl<[-S>m̔}|G_eS:cpfƓ&gk*0[~B̵ t4FUi]zy;aYg/ Er L&XJ@`ٳ;<쀶 )Cs8҄~,+i0`(Q*TGPykҔdp/M^m {AhclMslu ,Vд0ygD_Dnvg18~p`T# X2!!)ٰYjCEhN}"{2:4:2km ,~9 v[ʡнdxc,j@F.3 ʄ F'5GHb*Tf4}䛌+ "Xޏܡ cz V R uV;G 2F/[YQ5z Ȅ5HT8iFМi"3*Z]A.ŘU cp 4Z˒PI!01 /A, nL>O_ [betڛ:7hD]`#)#-EU#GiՖ%@Q 2"}>@PS(Ha[]ŊTE #q߫.RDz8+2ȫk6rAB}R[%b@ࠄnMYֻ"Id]v`E·{ݘxa^^]noE7zlۺ L0[hG1>"*&QP4dY>SMt>xZRUf/!Et<"rBZ/Z,J芸P^ 4I6iy](CJyFHs؀oݬ'p6M?&'"8$h[r F\,{yI^N QdIaL ,j12@z()Fr"FXz) U @.{b1յNlӧ0l ;jhYȀkլU5}V( yf2- &s@ u>rגry9(}:NPC|Y.X~`4&5aH_G1j 0H5Kdxz^\\Ӻq~7ֲAuW:9Asj8.mC6 ȢKmnەٲp{>z7|T/oNL|@n>X@C J{I=Eȴ]R> H}@R> H}@R> H}@R> H}@R> H}@R> }> ld#>f> H}@R> H}@R> H}@R> H}@R> H}@R> H}@RSaP:&3p3h}@@H}@ODȑQ}@R> H}@R> H}@R> H}@R> H}@R> H}@R>Ba>&P -}@@Y}@Oҍ> H}@R> H}@R> H}@R> H}@R> H}@R> H}@z2>w'\X'_t.KM宷ݥチncݪv, B m)YH-sGc[Z⃷-ej[z >rc:mMs81NW/vU7_]}h?3]}>/]}?]}J:4AW^]oCw%b rq.A,#vW濤ߋ mcX /OFx'8zv3eW|FC}[lr,y#V7;6uɋ.y{7oǫu?_۹/w?;PC2g_O~Nkmض0.p[4P{N CO&9| c5H{SMےMJ ٠pk֞=`w,{9;?p]zP̍qM$œrRnEg(L+4Oʫ9O}0#ihFT2.u9KڼV/xypw LK-7 ŷŊ:J6xofγv;֔=AGY%7iweGZ CKVCRmj^uP1Í.WR_ *vBtA%xDWjo 2}+D QrU e ]!`]ލ/th%m=]!J&{ ]IK5 Q&tR[SW;Vt#ZcNWUwJ+fJcAu7GzZ]r;{% UGx1 JQg|\=Birnޡ!eCBD (,?߆g]-G<$7`Mx9=N31j=G^A)NܽitLQr3"pɇx9,9;(3N63`m2u ?42"KzTȘ(krŗI < F^hqR\po^.T6#!<EdUh4 q(K&>feak#ZaB2ap`v0b* 䞊eP %n/͉p9WI[Үj+צZ$Z 'ihab k0>Bqo"9˽hn{$(e?HXKOtQgyDWV Q|+Qŀj89]i'^>X<љP*.5JtoS"'VBBWҶtAC^O sa+JBWV֫+DiIOW+N#Zio A}+@kj;]!Jz ] JI]`k+< _ rvBtAT2f<+| v$]1^+'p5׾5 7=]u4Rw"Ld@Դ%=]uڴOD ^CSl炿69x({,*.:s`J2_tǢ*T,/7#ɗԃ˙/ъև9R^7tP7X1]Y<ޟ05<m+DyOW_ ]ɊU/jfDe'QT8]C)[+#kЕjߪS=+|J+-h׶Զ ]!`++-m+Dٶz: ]qn5{CWzCWV0vBtA,ޭ45EWן+DM QJUJ O +µ]!Jګ. 
Gt]!\}+DT Q}Wݤ+#DiAlpr:J F0Dr/>j9L.P `rs^]z{6NB%5"C382]Dȼ= y3}r#›g'Q^<ף\LYX*7O}O~{f2LPQZVXZjx"^0A O|q:L[@i306R?kUy>aaߨR9KO}dNxwGn@.GK4tqpR:/ɕ73vZMoҵOAY_ /go2k`&kLelw)mvmLpc6EIc)4xW@gcl#A:]#Mx,9K.`Q]MnZXɘGZ KCambcVukAF#εs5m(nayݤ39S;3xX|Yw劖ǨxZJ˖<G)NTT?>I5}ƜfvCqΫd7,'r^*_;NB[J@X-\nVR*Ԛ'PFQ% c`cK *KwjTyU`U)Qڊi.8ϫ]+ =+2f&0 O俗y,؃4vnR;[+M?!YMX ҘY.WNn-t& /m=7ͨVMgnQ7ûRh4㥝`oeZn5.,.Oe 37nf+ : ~k:5^2xa/uoPhͤUpR7xofۆ or`྅*d8ךjvBtЕXj@lOy=4tUi`UTOWV=UT3]QňBWqvBm[BQ)ɴOtOK_ >uz8]!JEz ]qeH jxCWy "ZnNWR؞:HWBcGtG}WԇCZ "ܺCWA,7tp7 mW;4}UJ8nOW;Ut(骋tԢձ֐!WÈ2ʮj0Z.j _ȹ"L#IxϪӾV~ˤ=LRAHMB"4V1m,*eg9i|)݀p72@)KǸ;Xb2O =孧+D)N.ҕ+kg Ѫ#ʾ++]?`O}\=^ W:mC]鞮zjqO:"¥B\P=]u8!%Buhyҕ \)]`͈7tpm+D+I!#B'?\M0hh;]!J]+Ef>X1o oADk[OWҴmUOWG+M1Jst&>DtERJ./!C* 4@%ݽF xs$Ϙ_uAu̗5߃ѭcQ[o[ t!+9>tp--O-I,1~3tfj(3$.DrGrR>`$gVzDWX{4R ]!ZISz_]UoQ&`nNB-U=D[CZ6Ԡ+վUOWF{DWXk Z ]Zsaz(vAOWG+ƴ#BxCWAtham+DIyOW+ΉlAU-Jo JoխWW4qЕjԕ ]!\Mt(骃t%֧vN^=›`vBtAR[l7tA ]!Z`Q~dtbZzDW3BV=]uw'CBXez炿6L<)SqIc}>OAZIM5>h J"7Yzpcm"+>2oz-2OG0r5J?GE8*囔 ڞcw OsaY=[0yj~+_x9Oz$Op͛/<z%^yxksE2wZ;5W~G7k?w XFˋj;o[ x]d [ pC 1-}K^Q۬՝Wybl@r?^E_9…>&Z Hvwq1Qč4!"N(GhD3;-֦H2R+cSǢE~r0/(7qx \Jo#0t:7MF p,˓xk徾p+^Ӽ͂{` ^S=dt(+-C,MkJiPVX!1R8Mn@R1K0I2 J2)SIqIzIER<Ʈ Krz $j :emz²+O2SLZKQ.v"ȀIh!LEPa2QV.YƬsՁ2KI2J\J2)cik3AU&2%LIhL1ƱVy'sG\nW-"JᎢ"UA5ъ vK%?v͔ C]ȚUg<ԲUE (->zVp4{R;jF>R4*QN fRG/1O$1"A$z:?OlM\Mq?c}zedis¥*IBŘPIQ^>T нJ^X&˨ljEq0Eq:M"c2Ғhk;Yl)I٨e<&L qjNT[.(EpCTBLe6,@zC$?O?if90:|3zMvj;#JşMS٣NDr" v$,ex晓""0M#gRd&D7T$MvIi{0KA}h$8gJF |$:tCY %MGy܉ 5Zr תE,~ΌQҔN?.=۹DZЦ],vNGk3:j]O2MJ7<\~XοFJ(4NآUd%VXH3KBXh"2I 7CIHLHtHLHL+BԈmQI 7kAn&!n%DNA&jiLˌMU6M1qJebj`0f\ۿ"/{q~ȇlh^4- 0C2V#K$'u{8d[#M䱭8 Cy>'dh "\NYuP&{9 6d F%$!g5XdQ%Iڢ00*DEB.8*%IzBꖐ}Ȃ %\+%x&b)2H:TD)Ąq&U õA&n֜:_*?]zsrEmYс4;\/-{`̞)N#rSAf?a6}n'ϥAP-M8'EZFrj8A!Iъsf܏fsdGqdY(@J`=u@H.@% dU=ӌEqꃞ͛dQ㪌l_dӪ {EQX\u=DdZ!J ~5,\N*2Be1px<~w/}]wO)3/(u'0@ t7_~iZ׫FYZv۬krúGG* !,ekI@ ~8|;AHz99[|ٲA8 B|Y)Ju*!Sl1亍FM|DkT3XDΨ3ӡf: MI:d&i@}][YD#K48%>$P_u:יѷN:ÉkӰ~yB;#G,!jƦsyCj%gӉ">_ZVrrc_+멕l5[k2wӏ|%ŒUd)Y49Z1¡gL2z Հ;| g5-mx1C̼ء+չ훳ދٗW >&pJP-KgUGR}P TJ~oѻE-$|F[ 6Y \jP/q"T`!$Zv"R?<)EKY,9ͣ5Gzx޻rދ}m-pX`M'tms-ƴ*ڽg  $`Z6p\HmqΞ8ׁHC1h(B m#9 R,f0gj>t wX\18( ʪJ[V>rS`6$%G+Z9i>j96eӦm5u&"QZq! PޖEK/{jUf:Sؑp}ty0U*SN/i8V<-Ö>;Rf 4>rW-s)>wuɬ92]xAz/Ͼ^ҋgO:U*E >E HeШ'ύH9rT>;":b L+`< Q2D͕}&)9$h&\` t=e 8>ƞ}Rպzϫ<RΧO곪)ZPdgU#LX4s3_(aU!JFH33/:p!~/%TxJ$)8O#ek9Q*I H%V$XީJנLu%*ye"h%2Y'NiN!+[ <,oW(_ !og@PLiWhm浏L.TƧ~|%ʏvO]$Kyy, O)mD01 (_Rꕖ]I9+ 28())3=܎(gc@\I$ KWhܡ VxG-! vHr\3Δ%9R3 KYf֖p49wĨR>rn;Ygത>>2}[5>-!`uįkT YpNGQ9Tz#`,p,AՊ'NI!ΌW~lVV {hk C*>ydɄIyec+UT-z;{W|H!obB*J U Y8Bhl΅9M8:auNyujػ56b:On삶Nz)2ms.HKXYf7. 
tA.ȱ$E:C=nӧ6EL1]ioI+v]RG0vw1=ƴ6C,]ۄ,2eʙ)ӫGs4Ɩ9ZuҔ gp8 L 7ߠKe~Pƚڐ(i'>H&ժeɼXuyg%&gOa<S]b;/i|l_R.yv3rU\ѡN}sF@ Ww/$m\%}ӏ5h޽SI/jdTRY.=g{pG36.g,g?̓;@;(VŸ1Of=:/PٜI2{âT- Rw_sâ{\gO?3/oF/X }}>ss;Mg{I_0xOYY.d]pJMt> i|~2@<̭?=0cύ{P`[oxxCt( V@<纊^fDxAX6a3t2tAv #Xa0H"J96qP-eʃJ9&5f]مiyAOk嘒Lc'g"ǎ# t[|ݭ)Mlەv7h!U21`"{~!D2>0@@3PIB2RB0R\C[zדּ^JtLB!G,$*fX.pn SlÇH5,*r<*PPW`TϣFm1LYPɓ'kBT6IB1QD/}c-ұI hk'`k묭TD0_8ꅶFayarDCa@s`UwV SJE<Q> I&y[€;b901L2sh5gq"+KI8`w1>\hcɦmʼn{[ 4=ْHcF'r 3&=hV+o-h Ȼ^OƢCi1sr6F * +HcpU&䩇H4dN6%U"Do<(BE)e(%#c ǫI%mMe!ȪD֐B'!b!j0,k+[WWΥ{Kk~|';޹L;R`JUべŠ,4.`zsI/BgmEV|XU͊Tz7mр BZOƘ6 ly<ˎo;6w{?co|@4 nCR殔qY1.{6:-EGN-2L;2KN5Qҏ<7dR8o}FMcp6ڴE{#VO([ v7{8_MzIxwxY諓co:B-:W͈^1@y;1U*f @u3y?lHhHbFAd>z#i#I]J-VtvT]sqL(_z_uqE,,'x͇b7ضͅ汀rQhz"|iPʣߦ+cS+9 ^rsهяuC?_GF㏣5lry;r`5"~6ܜaZ Jj6P_V{ԓSioR5iR&+̤"꘎iaNP5+9M|4QXx8 +db)omæZ+Xݵ}.Y.i0 m R2M6C1Nd- ZIDkôZ+q߁ŵ>~Nw}HُuPُYTGǣti*ajD+2EeESJ#=P.xGUuxχ9Tfj/ m(L )crn$STg.P "9+s5B&^v1m1ʜӻ߾+߯Urr\"/A;^" (1B.J+Q9oHWȹ :z &q&8\컁'fcO)wa( v8zVp p[4հeV`Fя?-3 }/"y;fIw5#QjS^f=t74}(⋇Ayur^YϣP8a0}w=o^y\ܿSh]ן~|QסD>ȮkJ.U/MHK»Iҽ atY5̬R  ]̕ ه4zL񇴬8l&DD"ZiP8N3pD t/ Al٥Tluwr' \?/FJdNC XUmW誠T8l9;g3Sl]m Qj;Ֆ(y9ۜftsk]` 3tU**hm OWZƱ+LH]ujdv*(9tut%Qh`t,pmgHۡ4- tm PIUk+tUbcWJ ;DWX2*pyg誠tUP0.i^оtUrѾAXStU ]Չ~tUP_}˹&WX2_>BO8#lPnLO3{?;X"3B|>مkE6Ռ2]bғvq;cfѝ]۫ _i@]B ~Z}yvzw% 6.$Yv8\ϯIs?ˎ+[~ܦB3/DZW_f"=]i>Ftg7^I[nn2Kyvm/nVU`em_^\o=YڌyLtGAN]d9Z卢kQ.A^],o[QńMSӇqx_}sVR1'{Ǝ*njKvU*aR{*yIvpi،)J&)_N= )Q)Т9`03}x *[˔8$RA@Zcr zZXM:~3[wiFߚfq6>67K1ͦŚADkicc߮:j6m]>&~e_=x1:h67B7q2L|qsknutS2oNG"׻r WRqVCjMC4r+(Ab**QsN]R:dTmB&xsFvCuχgQޙrV[g!(d!(QN-+΢ѢƓbȬWxT!90D VŻq֭;{l]0T#JGWg[IhJdEBI!g);O˄da@W|<%O QmD^[,*m#ҊC̩&!}%0ŕ3Sϣ&r:Itubnd^i^i_i}W3|Mۓd+7a 8UVI)ӆXeTt)*| BvI0H]c˫'K}dq6ap|z{[)oM{O+}xPi RI)AbsvMFQ3`C ՜A^TD6}i=U#Tu&)Ib*v*#~;ry<ttv|&MAe|٪R/&g˫K e:VxsлrotiWWjZNۭ}~ٞSf{әz9Il3k />Dpu*뾴vN|sU U߀'$%ݓMC _)%_ JxJC%rSR77%şPJSRW!d )B-:#h648MA6mz}YU*MC$D U-!('Jp8ѝujeq:2kc孁MA.b!Y6Pd`O\"~zuq 'Ct? R%)H-e DJ"-a*@¢EL"#H)߲P0\S.*Gb~m)c֡Ƭ5r2)$lRkn.1&rzy0V1ZVhtJUTe ,$:t‹8J#rjdJ"Wi al#GX;gۥZ`|pUqMaz׎Na@Q4TtjeUNx1 حG O% x`_~0o2;?"s1 &:fK-ch_hMOI_@@td Pp#fq mMzNt) c dsTJ EA)/]ZWb(%XR 3@u{r ?H5c]@:QrP&b24*$V ׎`9XN<`C-2YtrѧT@TUb(fcmkkJE-e4ʿ.S,qfb`2SˣIt|{(<3)0gѦ{`\x}wBV<8vY;,F'CDŸ!qC 8F?.xS>y{vA#`iϠϞQ#P:(F={Oi ꐹUc撇MuJ'C[Y}O^FE2]E HH+Eⷜ"#ˉx3/F1Yt.!aLY_$o_f?ܶ KG,b"_/8y)zq{rﮗܒABcol/'ŋ0/k"nNYkqTGϥ~;O-em^L)JYކ_(Ķ24KY[-ќ2og2"}ωk3f ð_N{tTЉKC5l o D7tONWk 9@7},ߝ]VݘÛYDw֮F­78C=iq+D|+z-Ɂ68aZVG6=Z.tΖr\S Bn-ysr`W~75{?4+n}zvv -on9nOȃ|vo j+¾~Q~[-tt΁>X\LZN#MZx%o{Lg E=:@V4$H$3)IaT?iX/W]u"{@GuDESDl(,U; D,Qg}iq?=&:8Ze&Uf?*JoضU#(IudK~1IJq:dcûżSJ9#^y'Di|5cq&aYwړ:ZLvjEToOl5^q%h)ZL&C T$?b f7;#wR\hJ)`m*j:Frɷ:gtΞ9R  psy:}eةqB6ch,}R;p:j!$`Md RfWm ep4Yg$ Ac O$Dr4^٣*{Gd)[%ӸIr:tɁ)s=݃fwSZ^ari}g[y]ᖚy6gɚ;1Vc4(@1j 5#&^L)f,B޲ v^ɑ6݉2O"RSR-C(6ORb% xmABR[wMtnq(\F.<*)Kw€Ϝ\|YM}ӋOb3vJ0 sB>㨫3GVES(m 9{%*Xhz VTjUݺ8cYr1OfE^q(kcgƑG`q(1aֈDsX] ^VlȂUBW>cI.!2l9[F*{F./Q8J6T+J{ӭ;ʩo!Oǡ:3bqdć, RnNqEkBNI[昃07p΁}[V#oa 6I%krOڈrKp̈ݺG%w,: /. N)J8,8,fɎ)BMׅB*E_k*>qvɛJ'h/8s,y Sc/%Ƚ?'\ ڇ+A(VI{$ُϔjcUsv%M~{79F䘦27pv_sQa/J|-@!-L2.kK.&T`cK2(PzL-ɾ{x2{{2 хyZJ=C#> UXg&HU 4@$fbPZћ6^$5VDg;SٍHa:IۃIii]讴LU{!m4ٰU]Bfrg< PA/CZ)C:  B1s:'^ɟ=IԗׄHZ8C S*L :GdEt )hԢ;b-b!k3wGUa 2~N)ؑ/s^J~`~ YE |$˜M1Q Q6FECLh͹vFK䪊.CZZHx`$iLAdjso/[w^-Z٦;!0;u:4eWqGj>߈?{OD2Lx@oEnxql+W=c;8q`HvwuOUuUuGM?vנ^YR[-` %zS+!BGTmNkQKbQp$.qp˗11(iNaxG%fYs9e9 NdBc eE+C>qGpyz ɻͤ:j%=S]4@y5cjgÐRYNJ4JbjE.zeLEb0XkBЁ3ʊ +;Mfc=<XQRIq^8! 
I=2YE%uF;(C8O['"6;9ǣz1DPyr#M*j<(?&`218'6F9abR֨~iXX/ 4цM@H(j[BǴ7%y &tuhȧ#!̾s6c( 2@z#NXH ֱ[UH(^' A P1:5 /B+^;hNw !"MIY9k>A=&{{.z"?!C,XȎe>O{et26ŇK4_s!#Y\1Dv<}yJˠe9T].GZP',B?e/D7 3ΒIC'޿W| 4.Iz@ªɣ:ߒ hM\>/NㅓRX,|p}0@xХhw+xz+f٫zQ88cxi_s4;}ɀ/ HiSR-$?~=OC֟~h~;FLDUJ*V !n;;VD΂r[ xϖBv6)=&B ĖFKw^0G&ˀ4Ƙ qBҊA{g;i+>H:KoA:`z!8$Z%(P9bJfy3A߻`.SXBueJZWZWkBKD c^戺ЊFh2Wqr͵ȍaBYhݥHW]Hw+v[wHw+XBv8ՠܮ UjQz VZA 4&9E D, @ۍFºIJlB O8FG)/kKȳ2!Ă퍏+N& H)*^'.48͵z)).ɾ0|jee0QFLSdv((ă0HA)L3`ga fE-6`M@ZX.*\F.!}w\A' pVWaq؝G& ""gy66#v vyD*\=ʩRjNV`zƹb&&ΊS8Bx)JQvaMUk0SV.^gՃU*0JM/aON˽z+@a7<9<59Cڙ4735MCaiV]HA_QFe[:Gyyq ԗY Z?kȦY #|1kjIs@FN1s%K3lT1fP8?,3::|Uo~}ջcL{N̂Ke@ v72~jFښ櫧m35osՆo3js ^%ɖPhnqDi gb_.y~kŭGlYOH f3_^-I"0钁3_21o EiFz}'E$bm0.F$SI6(j%Bl*IopY^^P}2: a1 8V'NFr먻Cs^&Eh^v>o)_ a2(S)x*BZ$#JIĤc0cd\9oTo%};sϪ.'fG̵Rl,v7hgMm|Ls3} Kn"S [cI"!tEji51 sGtNR*%cX!6 Dqݪ`D9DŽRn&x92JR?@^c%rbrX+?B"Zg`t 퓄n'!5.i«dQ{}!hF{BP}`'.#A جC(!Yޚwb=O.B.ߣ"|/`< Mv a3)@QDLD;O: irdW|uZ*ӽ6vYmv]gU5mR/=YE4&&㩣qER(*{U˯IXOP5|2̷dod%+l/c/ddh>Yֺ>Se%qϓowxRh~V>~۷`袊IM!gjVDе#C ]PoR[̤^OYsEA@R;ettG .>4ڍ!D!e!bj3j2b=6tXj4BZ"ZV[gKCWɛ 伈lP,KIE7{[kоŀ ĕEm51>_NgŘUktcQK@,&mK}PX֥]W.ze 'nW˯q7۸î+#/;Q<yz|Hͅ+A.y)q_0pej+^@zuml[9}Sn:S}KPXW4Δ0DA`1%&p%z,&nJ(5)4%J0#]%p z4*AK辳%ٕЄū_ W?(GEBWB\Y+_y0Ci~4feW ss XIGD2۷_\|IX tϦfB?p0 e6@R.:07'CnpQfN`xMܓ;-F{Z t4B壼D\qXt'w[q &cxoWWtaX`0K)ә}f>U5C}}<ĕOLrJ͙"gzJ_19ꪮe /v>mA_ȒF,E"i⡨:.S/Fd@V+ە]e,攦&tL„SiT0VEl'LbhjML7iն ,'Ņa.Q=五fOzr/CJr0'ɻfn \zPip$}nz WOBiRҞ p3\m}<9$/;\uq \uiw]-=\u)gzpT`t`ઋPKaK% Kb,j.:]-B׽tOGlKFNҵq}x{ѯ-pW'J',z!"\K牠ou9YCYͲp!p^?~fz⸧s_-:Dqd\(l&<$b% Xk4lK dxWLj,u`O'g yWUo^\׺is]m㟎/)^F}y' 8YY^WN?[(CJ!{㡊g6Yj V{jHWc 4[(>&?I^_?%'۟˚뚋?C^xtn?}t+0}bcs 2߭c/Mz-\sQ <߬x}bv;Xs,rKZTkMWw;e/k lu>1 V(C+63ZϡE wY/PB.Fn\c&%5ٰ>@bű5!q&Гn .~y=@ܳiyܣɳ慏q..U **,\KtٹTPRɁ$cl*B(`ܑ+(YʦsaK/j-`2Uk|X=Īf@0qͺ*X=60ԻU0␲ ->Eof~ak/AZa[s TjOl~8v;YuaPm4>LLjwv&eR.PӗHGc% ,/" l<̀GQiP8hRj*'f6!gK+shc\hM 00q^-amْkAm1eU]l3ldP1r4VDcجmb(%2e.m|TЄ{?GMpr5J=bK$@H`RIZuFv6]˞c*ptrٵx"؊d9S* s*jtd4?1 3B>ƣI]irK]dgIy߷S%or?8QyW#wgN$@gq3s`s`i>kP`7w_($ƒ'/ۈDIK$0))0PegaJtOr5ltbr%нDu4xhdl_tO=y CIMU':6L%jlTs@H&[gV_e14 qRsӏ XP);qwuօ{bG> w!͖;rkYlgoؘ\=AMm`̥Wd,QΊ+5 vMcUUx6Be|((BP+Su&ݮaig ;0#(^ޛ¼:UmX)!Sr$5s ]=}2&0>-o+N&jJZ 1􍚉+TZh LQ|yLb|TdC㻮f|u#kG6Qok6FD6?YQ8*ꛤ,k UL1*fggdlʧg:iȦց_bEXs<~N }˴xt[&} "ѳPndgJ6,w!w\c:朒MJ[ 2yu/(zO:W*ޝn>2{˳YSIB͎r+QluIjӭYJ@1֓ /qULo Y졻u{>}\(꫾}/zm=yuGJ]^!/OjNyrd5~vlWw}]u[ 7(l:w&Ms]^>bm[^bWH>c[N'^csQ=We4RVpG#9qh8qL߽?oL\)G<Ʈw闷.x\lJk8Zd2{qU wU (S,f֛zxSs(F')A&Z/1*6nkzjLHJ%u䘒XjRUƖ(|Os|3L5<{h[䙎-/||zSlu!69-Nr>)9/l41TKoCFKIk@@? z&R R#d+&5G.>QI&YE]zXE96'Ǧ\:;=Z%Z,`V7w˅s S/ޜ`>I8Q?ߦ51.I G5upa4FU@)f. U,K;]»-Ķ"R2·P󡊓qiҋj aZ.ZKhd&SqaX8L36B4cNV~o;3^}uqqjxŧ||)~<>;6N~jR-T~Vm3J0S7]p 1tg/0ʒAM9{L nۙ P*o6,k0bqN'eܛĿQڱ)j`g\X}5 ni"Kr[PCW/8 .d3PuI%WoՐzҀ"[=w0"go.uR\kbôdS\\d,t]-T%"7=EB*/(:Rb!gqq7cC-֦T3nQ*j P&،o|em{/{2XGyw(9z5E{W@bb.9u./Ir eߏ${AO]ÓA^ۓy.0h<쬵H9?#gkN}}BČaRlNijPaBg$L8 RHc}tҥzR'OdTz]Dw;d,Axzx,m_0ҙR=C; 8R^N7bvKYsi(~XڤޤV2 ̣m_*O|5e۫mͤʿZlgLL!r̪/H>}i:;;A@Ɣs#T -:L^fs =8M{jT C95Z@jl-,2tk5ZV}Ypbvk1(qc7l2l `{}(q?mT@=,䶘OV/.M߷>Yp֧|(&YN?LK:ۅx:;'ub^a3Oᗓ_F|mL4a-~Jl((=unvD@7e<0JlLbJJE#WW"a N 'm"r}/^ЛA#|Zߩv8N;qB#]S:kE#͖ۙ=ӄ%Tۋπ?[Ҭv̬V~8vO~H2LS%sFKWiGT 킲&h8FHp3"ا= }HJ8 %8bvDðqMks=Ϥd2aR!3G!sŽ2vR-wJARM*AR"h:Kn=BJP-SKTiD#D$blHfOrV R 8<;UJ繎Z;ݵ":"!qXP®@1( F-D)QJ"&#p=FVV#?(WSqk,F]^Ck]U%gmk's 74Y|N(7=ؗ0YOj9-r*4gKr ,ӭVsFߕVZw((s;p9Ia4 pN& ETh$-kvN~ݪޝT1BY[D-ALN \3яtb@fB9UiZ1 ix43h V. 
IB b$ypԁ#V G>܇6&?y>~*byYJq]C)5d|:'Wޡ[˘[Aaa(RƑSIm 㝥y87Nc˔\kM̨SlrD`p'5]l1pr#`05qx_m ޺|ui)z,GavUq߾;$X&3OTU(s6֮4Gjt_ c *^P])+SHIsbpF lL嵃(:$-LwZ Z.Ⱦ2F뱉 RѰ8jeb4tÛ+aŘI@.5C\x+6r-h $Ahijt`Ws]?Χ]3zM׵wzг*^F8l0B!)U:5orAӸ/g95%7,^ZrGZ~z;Yl>wFmTM,4V7'ݜRA])~1ʿ44j: &95Bh-w Q5uwmRd_:-,iLP$)̄06)ݷ`W.JH0ђpyiN`Jdۙ(*ei}ױ^jY?$CwwG4"K!}"Dhc,y~뤱VQxX 9 G"p삊-E EHgr= % ? a@dtkFV`89˜ 2{Vr=| m$ro' w՛Oi;<8Iíz{z=D\YNr1aH@),'DD%1"2N&5!eI-i SߒZRFy"NX)8Z/c@dRpv[TRg h24p@/K%Z)ǃz1DPyr#M*j<(?&`218'6F9abRm~nXoYEThBpHu@H(jZ\|6!y "tuh(9WFSETyDZ[&HoD#yHK*m1j=!-rddtq EK$hj>fRr83hk鮁 DT:Y>>{G8ۆIXW3J|tRb&y$U5 =Jy>xY/5^#2g|E3jG&𩭍 N& ˁΏ\5oykr뇮j~lhaj FT %16F* ta˹6kӁ)W9b|ÃrAĶ+ϹbE#E RrHd 2%=KI%`D0AP0@2qU^wS|,TtC?Aih-fHQyx{*7qR3ۀ|Db% V%]~vYF!H!$E "I`%X1QE\H"V }$kΝ%9,V`Qd>&mp\}YM8B63 i`*. ?I&"B8Igϰ00 @I!uJ0:7.lYUSCR\">7*(bGhQC4Jv`@2 (tpNSw`Fpf{1yVUdE-Nuʃ5iatH\/`C0OȄAIXv痁I6H4هp;TWPNgn.Tj)5gtn^0e5qqXH^ }-M'|,To-ipQ8 B0gK{ݓR= 4rxjsʙ4!u0Q4]HRAAEѣݳ~q ԗY Z?kȺYk #|6kY T8&p|=4t 4,Y~e A9kp&OKq9sS8w?|sz}׿:~cLO^؁qp@ \?n>5#MM USS67r7W5ydsT(]7Qm|y/#Y/*#]̹'kR|rpW]$L:'`$D X:q뗁Y&n'zRK,VbDBI=%(iL!J BZP|'ݽ"yy=_P=wǠnp+t,JUu%Hd<rR/orVn%[p7 c2RiLp o˓-eAbPA Jm0Swʎ<ۖMf5*hCghNfug;eoENjꇪZ2ea4% ;|:OwYjvvY>4fs= ɎZ2=O/$? r3 JNӒ=relU\J .єl9{8߯m p,kZH4euWc;X?:=_kfӹ kWF!Կ&/f03%Y0IBsAkWXD=ʟأv_'ɗ9ʝJɟMK^|;Jq AH~[zpu?s|pu?k#{Yi4QDB W[tnNrzJi-ۣp{`U/ :zV>{?p/x2:%{qeؼlxTX1|::>>(v]&v6el3ݠ3ao;=F"g;KO/Oжx<[|]f -ۮ86@Nmwb4棳߯?9ĤUD_N}qޱ.M/w/ϮoxV9ማqn%:\\S :#Y0掹-#i:ürϾ>TT.Z54͋b~_};uJV;As&Z_kHP"j-%J`iL (7[t 69]K:myqrI/fm.IiR ojdiu՛`4 Bki{,t YLf cSIx3$fNYVb6YT -ŻWpGK8 FݿWzki,$)!mwy%d6JÛTwLaaf]| \1ИUVCx6;_ /D _a-.Fl<0hqryYl;*9C'Dɐu U;:7GM}VeEM8Ubyk%✨H%9ʂM9dJL~ :U2Д0iz7W(c&R 6GHH I?ҲcvP*NAs_k,tROYG}hhUcPli_B)F|^$\, Ht=+l )D!; x{$}.%ɆaJD|U id rJcO-OX?, **T(:Y-wZ 9kxA9]Ye~R7 (QMBWKAmen±D6V:ƌD$Ѡ+.tlhXoT+TR&S]E\֕2 FU5dz$i}֭HLIEATU_w@AkAt&r3۾?}+G-J67[U 1 "VVa2,3*"e  FG4 H2R}2ʦ«2 $_#XYUD25Vj$=5(!Ȯ}c ,R u՜ 5e 2o-!0S!$}$P6yЌt4gJdrS ukJ#A98&`&AB}1Ts=c@YC8Q%@ qc-~AAN&:4_ J-)Q;)ӼeG'g^ 2!}UPSPzP]T콠V TuAdE8hpZBR^U 5DDjyT"Z8J(46B j| A$uH*.r˨ZQ jT'7q\Ƞ$m 6{pЯ544_#BT6ztbB 6"dUs7kskM>m:@G~U2&u`3GWEK*d`>uLAC!'qcGe&5%t_8gPԒ"4D(2ӊAU ׆2az 16= EDG%!YIF VH!38hG_MƢ*Yȩ ՏDE}I_żd٠.I|ǍU6}tn<[W!O*s`I%̾~6B@FB|u)}LAjy1i"!9uD!QZ651E =f ns15I(D׎n<RԟҠ֨H!6vnj5]QjF,F,npZ[25 $=2Б%9ص7Fu& 3%)J!B2'y ZA &(o*"*jQcQy(T%D(AV ?KF4bl('r(d Y-_z_!gѝk(ogӤe0o R FҸ㠝 ˣYn6ܦ1$Z T4"fU::hѣйk0aMi ~;,,FZ#Z(ΣDjCKDm1&:Ѱ[N*3x(tR"X-TP=ʠ CBz D2PAzz=l ' "($'M2P4n3?E^^^0b0 zYQcMƢHȦyadzwuShp?{.]ڈr1 *S{[ɘIq@ZXf=P(-Ag{&hd6Us֎su^y~kPJST`j7l”P[;i* X[TqcE+)L:JNae J ̀|" =ʨ}!1?MДH A8ANZ{*.(=5!FPRₑ4 VA̯:^@\T"-ul,1eX.R&b)옄GPVq*xBN 踒#f LBF弭ʏp]g Ayg 9#'aAA(^՗Av)nZ,vvU"̸B$S i8[]WZw'o|?DhXgSE $(E"}2oC}ԍo>=wSa}R8F kxP_ zHi1s@1s@1s@1s@1s@1s@1s@1s@1s@1s@1s@1\9  `sz8 ףp@.c=^V>r9g2oAo}U~h~|{?,[Î%|y(˝,8m\rfߺ:|@!{(8j_8á'u'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'zb艡'-D(v$s})~[NU-z ÚTW2CUN%DB\7m* )~U;V.kEj!/8|w|tz,k*ʀ>;C\o1Gkpt٢5 Cto\x_ncp܅/˫]N5 }x;v')$?wYv!u`]_@~`.r@mг7W~)G9Q^ܼU%Z~O>;l' j,nUQJ)͚jf(୙e"-T5 Ӣ t^Hr;v͇3x]Ώfn r4J,W|>Lѵ;:ߕ/y9]ϖA&۳]Lnvc4Cq\ꓟD Xo<޲-S];h(o{\p mMuj~~||h]ݪ^:?.CH` S>3x TˋF'q|Я|i;=U/VjHxT9|wO})۲zc1?͑k?.OIxGfdNrs޴jl}pFyT^x1g7vys5jٹu])x.;rg֕Yw_nNh5pT0O.>|Qko+3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T?SL3T3{V`A^ Zݭ OQ1Z j{? 
' dwNyO9ʕ.Ç_˳X ѭ ݸJ׫PPekֵw3ldz(NAW&]ׯSn"%|=SD*)h+i5CgaKrΗ{= a_Ds |?>!8 ȪS&h̄.u0IfI:e!D$Dx5o_Tp_C\I&ʎ==g֌ڈatD\m_KoRU锻ч|aXLn fDs4Q Qƪ7)X2B0.~ /X,񋤜]T~pn $ )l_E9-OY%,ny,Y#Ӓ_xan^~v|4q)\lla )ti"5`-Oݏ_/VINO1ђ=c1?fбr;e\8q/1ٕ[]13&1k^$o}4WGHLm\oD~Zvdn} Ϡ1J\ѕT߂4GŇ]B$%)ջآAiT^`L|)R=4%Rց[h*`z[BҜbzvJ$#%#bw=iK\V_>(ʔa;Jbо_&1GPN[rs@SK2" J:͸T 1VK _է5U8sDAJ#(llsh."ɐc(s<-<.%5@B4&iSSY֋Tؚu߁\HbW;0>@hhRL\d +D!r\yro?4 $x.k\ZF -8GC "LxCc p,(4T՜b9,jKaZ4>j!<J˽”+,PT ExI%cxG0?=LUXLO2&j.zI- F4xc+c T\rq$L+{9 uTyfds$s[3g>˩fD"Q70{ԙ֒k>=1~g"&5x2p1ja:TcAhAفJS )ˌ˭3; %"1uK9K(傁vsN8hFD{RHugJEܻ;?jfSP@*=ZHJ OE 1řc#r?V!c *ˎT33R!d)s`o r-(1We ͐❂dƗ.I@Kp%` &xy c\L1ƙT;iFPD2p"G3H<'R:1)[ _y5`|sҴrJM&1=ǓC DD\W%g9ZFqƁ)BHA-\{j39?JvٻBxU s]5v#+Hyz` -AӻkARSUWωh觨r+ {DvI =GCx ,' 7_+{.Q4dkSnn+aـO_9ⷫlfz6͡Wo^.]tFBeǜdf\!ec}h˸60=`=\f3K `I=`*nr$ӄڻyinG ^k?SxH[ࡂ*M ZR >#B&DqeϹ.sXL.(y'жQCdѧ#5` %0cG}t eԑDSG-G?K4>K4fpDeBEQsٛN\*L*\⡵:W9ᄦ3`@9ED67  kANZ5-Eln&J{6\b9sύݹ޽%s$2xh(+;.HNL}r 2P >gx#b2B"Bp*U^Q(e o_ZP*:m(67{h> P D QϐPźP);enbcʫ[2c@){ l9<fQ6Hª8_/,͚5cz)C^ *E 2߰sCS3.e;yh 0al!cw=:r;w+`oQxj7OA 5̪BW֪hrpTv?+'u|OAL9^>saRnݸK]RZ`ؗOЌ6M͔r[!WJrsljhix- -A1,*sz:#AawKKєVy1-pª) ^ ]ﮚPd Ƃ:i 27!#6C4S[=FWH% *8G/Lht|MxvB'M .6GJu DBd!z<V&/6[=qϥJb>R$nxJr1]1KJvq̪ Pl' V:QTL!5$G1ZMz3{dKًR5s Xip Y1I)huBw넔W7wn@G R.:!iz,B̡c%CB8ƕSΥxK}vRi JO[oe9=wx'ЖB>_4}^<J@b@M_XwK{`nf X,aW%xSp_9` _>7 "nB,,bv>@2-3OVgJ'RքxB8(&8)Gj>ҸD^CȋnHMr:c`dr;?ɏ6umML41Uㆧĉ}Yo LJQc,g1"9&W5Yn$0֙S̡K`tONgVOMk0p!?~x`Q/Yez+|0UZ#_ޏGi:_,WlzlҪwDȻofvjv~\>l-'ڠ_;D#'n׼#5&0.ddrUHC-nt\wT -g6$eUWדqA1 ,;%gj⚦\%OrcyySd2x~+0zן#&9y!} 7:Œ{N#ELL)A-h gL3 -,nVS$)dޑhaKc4e`(y]-CkmwUS4;G%%#!} |Ptq,43 -,h! h U<@8< %ydP4۹5|ʱe|iԼ!haN^C0!P3LH,M28d2 U_ο6¥h=Sj(~rlFw16/Kp (B'8ihiK4~u+|bP]]\~U;q2HT͋75ȖJqΞ!d븚i3_7'Tv2ZӂEU%Y% h+$Twl<ꍤkzL͞" Ch2-G>}NԉEF#W B57"Q;w-;p~iV=;r%ynn5IwߛAȎxUmŏgmBuRun ^O]y}g+Άm}txerCE]V@Yyr~s~(OYg/v~:իOsiEȃ#߭P]VyCe#߭]VynO!e!gdp$Vy%'ENH+8#p@(& @mS`Kg%Õ uעN,.A^^J'Fm_64/{(Ԟ|o&'oPت|߮#?$=*߷e8Ӈ}L4űTa;ddIFZˎ<'(k~?M8pn7/Dx)m^Hw"o1\z$c&zL\ G@(FhX)#ORHt,EEE|kiMo]smM|kh'NdC;iڦ43~(I{''ob8OQFP8HN 4ҩ4HK_&4׺ߩ=.nB=i5\R7^hL MmkC/5?)HB`)jzH &.S==TډvM*-E4EѰRRBF.ED h1C@ hQL) t&^qxR\a'l^CX}tG¥:jͷ%@H}!ȞjoNWэd=r9 ͷɁ|3ߘ_`*qet~yM0Mб}5\J~4ӷINrpkfa@EH İHm>YQ?ro" ʐ0@˷fbX _k Cyd:ui^LoD$ ,z gez[~S01:TUgQ.}Q(|M:tGQ'L@\=68MdB:jjB&E"ךRpu眇'z{S ik<.&EQqPEPg5۠0D2kqJP H$Etd!`:PL1'Q51RIb,SHC ) #seQheHvjg鞓bbId/uET P,=%.J^@K#VɨD#Rt|@ QQ#"5,44'L <߳IGF!YHc]FT=֢BerFdI" xt`/]3ݧI/&fm}TN&j,0x0|{ 9"9+Q"1EsrĊ%կ ' Pi (1`d:ѳ,qB@QP*RsTr-rA֛$WGTTqR*I0^RX%:2rU2 W,xNUj}$fTQٖ.?v3v,R}DzsYX<"dWgcFSCSAF~mX8z6^|j:a3zt~:)c4o+r7549F(M(?]_\R2+=!7R&+ڶb<^֍Cw-yK>_Qdxƍ޽111hG;[~:EpXni@CXco BA/Xy=!ݧud^Dhx)hw'Bco^`t 䤆vozs&O ;=3=է*x~yQ>gXܧ8O.,KOޢn#܉WsKdy]^/g-G^.ޝ<Gi0F>~7 S;-uVw<EE͔2U2|=+!zzO^JrT7 $Kg[E+ɞb5t_9 I.L 8mo$lܺ ؟zXqfl .e2{?ڻJ CBwAWձqӚއxL(?d>wx8Ԑy }iR, W?ƕ} 3wl%kɾE);wr$Ĕx 7}ofo^*$;nt1Ta>XNyL,b2Tq7H-(M.E @%&`oER̃28-S$hSt 6i"wp?/{ԍ.f _Ffj7gZ5MօZvitx%Ap L5/$J ݓj'thwII [ރ}N|n#Q1i7[ ^  G$Έ&p2)ל(qbw 3X7K#Dr&~rh?\at3M$! 
>rEG"{H0*N Q)\QmM-c%2twcˎQyS^g~ Ed[R+W1>L6v |GIdN~xN% (őK0tT82pUb:WQ@+(=)GzuՀa:DjA"w* 1R*)ȐdQە+jMt#?wedzOr_+!X@UK m2ZOpF*%RzX}<~(X3NQ҅pr%V0MwE=yyGSWRl_KhAiOQ N'`\Ŋggd]*׹ϓr̦ߗ骿g;Pd4gI4jUЂxMكRq%8b9C*3Z4Fd,4Q,7<;r($v+W^O& dKm ɾ]ϲT+~dtCJ?)WkͰNk+ )2.WNT∧mndb|+q!udcY ͌{Mtc-=$YI:F1aY}eKf+q2U'HCl3Ylc+{ST:mbmш-wCslyl!QLMkZ[hc+[ X~VAL$BPL,*!D՛lHMny3zFI:_v~3v9Ǝi]IuCћkD@ %o-׸5]AhSYWƭ~7RXVڊkι2K51i]E0Sȹ`n vj<ǯ} \,|\bJ=(ͮIGL?#?Er@|-_?- (İ>iPy=u`8a6h7|~6.Ӵq4m\Vt4)2(͉R(e0/8JPPTi R??_/IYhԯmb(;`Mj?ek`c^9P[:;ٍgZI?>e.w|˪W탣Eż:\[`iv:^ZGnCY~𚦎9VHe"?Yg`H[K4ˍic Tsu1A0![ENʠA511"'1ނ+)B$#,` р0$9Kp{WE@ )z7*?wGqt CpTZZ(c6\ṕ[xAՊ -k)Z^^G *7=DDL-"ra.iE-m[֮v1/@TVV#nc]u:XWA+;89c-kq%JaS^v8{r3/9^ m{ nszvן?lwkEw;I-+~G!^7*Lg!SoQ!Wo0틏Lq0]]p OKk} =px]]b oF3;P9`??=F#\###7<Hf)ցe&װͅ"O%ֻ.+u~#uژNe,aq^G{` SSZ!:h}Hc>5SΤ  ΁:Alb)8*_G),gW_: !:3qtLE`Ҏdzw]=x)7pgʟ;;b7#{l<';_yUj0xԼ3RXZiRL>۬nFNRP˝eqA,} ΦwnqJ!@ g2S0f>[`xJ|㿼_i?r6] (kQxnhҔ}ҧqHƕI1׎pU"adB{0Pڒ?WU5q:WQjzR$(ј &ZJGSx'1Nɂ*(P"@F튴Ox$zb&"& @<\G 8JjPY83dU3;(,2t"n4 xf@Q`/RQ$/b4MBI G w9PAYnX~ g9R_8_ 1sk}M}r|joc|{[]%L  Ü+*5H,ҺԾsXe/hKF&t,L`>`Q\d3 e&70%ȝj~稺bL\KaCjJd9LGegϣ1Z-̂?Y} S3N}!ƴ\$AAJx.,[x]}Ys?댉>mnr<'zeG|q8^=l58Gy=]{Z4WJ9sC.!DLKAn Jb=[@[@`[m '!fP9{`grBl '_i!Gc-F=3mM:{uxIkJT*^(fKU9:Ę{Q(ZE~bĚxhϽH`F7x~}7`E'o8 sJ̾z߽&"H:Clݜjŧ9> ʖa^2ϲ:F!g /UՍG[VW`F:8-ʼ#ՂyG[]iݽXwl3[&@}c_̴]-nMWZ^^_^^]W-]wzcڨ =(*C@ U^ߣTZg֑(cT*,~x8>~m[m{$n8\rW&:ݛ7eGbNtn f>;)Drf*{~L,9΀X}@q2w;s<}bqͭHhϣJҭޏJmkϰCٯyx4ow1!.խ 3-0t:FCC^Cy6pwf/5 urv3jާXebw= Kde$kBw޵57n뿂ˉOBjUݸR]^y8vj1(ޤOHI @\HnKngƷ?7>ep%j=0m 7\ya g0%!eтzIIX~$eId<(,)詒WeGtǣ;:͖V6W9Uhw̭{r~]qD؉+^9w?sb0O!Lӛ/sno%E Tdq(CFqjoND1$zXֹi GE8m ?Nd?NdXd}M1sO{oU :ט'EeM~Tgk9;^o"92AةVsVsVxQWՉfG@ku (KROim{,wcyѶ;?JKI.g3J^fVly:+*KQ,ci>U;K& omCNHj sd#ձ^UP^y^D8KNH)g(E̮rdnSϮN:r|.s'/|,9לa vpglWgs$H(yeRojJosdDv>sRyF^uN:KF阑5X>G6;UB*9E!Ȋ&0ϒ'ṩ֦۟oo˷O-Җg5GC Rl>CNk ,a;O4u/6| >qP l kCqmChFFNb ljJjp^=Rcvi jiVש;մ µ鴧L`hҜ-KpPT VLݎ: ͘6{ mtԥUjtVi?A!8A]7\}sxff YxMV:7_9gTeT &/cO@ݗ`x!6n'W LH·j u-YR@yKV{>U-&2}ЫVWfj%8{oDP }6}Kҹj>6& %iI c*z䒷!Ͽx[p;{_kz=7e;{{m@*Rik1QjBP*{UN#ïQ;6}cзywkp,}|PցaվC`ʐ E+ vn>aTnAd=1JBMO)5RmRs܆ s; oo_OEZ ~CH ZhEIk[G&:Azx!\~# uDPiq>MS>8:x ď ~+L6 VcEC.< SV/$~5cdc̳K|!\,Pl4>Na6l!Du:Q|(.R>hH;("%EJ`4"6#MFeHp8u0/w)P?՟1hTVAه?2,'Gg!gfPx(s7~-@ƈq\) jAo޿a6MU) }yCG%D:Δ*86i77h#Hƻ[F׋?w*5jhq5TҪ;'Bv{ 2;vJjx^q"sou:x(gx9SIg=D5Ѭm'9&g#hib\bVEP <+BV:N5-Ӛ9b%Opw1V+}QǪCW"P1')ZWI `g\0$Êål/88F4H/B(ҏ^5ȭZEèpqi:e8v5gHr.L acd6R!^9pfqFF f P94DX)Oza%StA=2Y:-<(;QBq@۬וz`^; H²Nwb9@&QNDxD؃ÒPc`-ŁA:XQ@-Fq 1#HOV\Q=$5I Xw׋4,Hܛ`R!>?}ya}S+gۋo)ov#)4WaD F$R5D>\@ ɷEVS!ܗ;aeIHB0ZaOЎ}fJiEt1FNrad`bAA_AJBF \ h & >UYhY/+(sX׃P3vF)Ǥ$>4 z8"`IR(d9D@yG^ &?\*Aw[p")!H*9!Æ۠ f܄quMgA "TO=R{\%x""#W`(B\ItbBAH͟ZʴQ鉤R*ZĢT !qQMZWҒRB  ~;7\kj!Vj&"Rhlh DhsThP~G͐~Ԇ(L>(0~ D N?e_ mA 'y?r!Yn|/rYfjf7Il/_ǘ-81#hMCJO]>r[ɲP&ŏWLoo:5@@(1QŔ-R)a#HѨCؠ:aVNV0%7BYazo,jҩ0lՠj\%w粘^7<% r>>\ߚyO)X|mgaqOX^C|#W~^aP ޑwo.7yn|Հ "_>?߽f<͗[X0x!tWέ͟~~ ٷ \al9XrގꏒO/M{RG5$H HNJiɌ#B-H^d?ta.a/Kʆހ6v/?{26_M Hqٿz2{?U5Qg!?<ί_y:??\^\`"GŗJjuqe6lP$>=Y NOWu@օ7пQrqɧӖ‘|͐*HGpOGCR4 nv:8w5x` 3,03肚5.JƼ4żB r<ܨH'\T7o:VRGtGb<@x0i_ Xi$q/5}b%7nQA?Nja% 4<5 /߾ݷ[}g ?ܦ{F#Dr ?k!ZkNQcRhv'~v)i\1&zLAaWr-y~*YxLR^1ϋ3>Y߁77Onp1!Om.XHgA6EDT (H+ &J#HzXlF65hʨVd$:i<'о~( PƵV/H:cA7$ll5vtZ~jv|1`w$)ۅ qɸ@$ u!XG{2 DlgX^?M];*\H&٢7_fϝff|tj'y巻WpEBuҨ%#7ϡd<#q8Qw{*= 0~ ܡg*)ɿҗ\v(җKˮSRaĘ">۩k Ii8΃8ht7Fg% g{(- $̾6>iQ o ?[M֛cHsyѸ6'~w8iMV,/:r2%لHZk54~wV%>cU2pPn|fjݙo^zwȡT`~f"9c{7wa{9aQ1vh2Lt1DTI9mrU- ‡-8okl]fL{ QEcTsv_!~(O##A%F$xZǍppW  &Hx:T?/WzGҮ렊A߯GvVLwyٿl:w&Q\Z6u˳pnAsN ̒HOWcˇ_ާ -W$!*Q\e p7H <3&|s|=$e2nK.t:N );.hp*9W%2!sz'f` XjWԄ\u:d՗S `kZYZ4_K" lxnߎH{0)sƄ[3Dd&: O%accK931R,5J #ik@"'C]_Ғ=*$Bl~܃O0ȬMLf*&0b)vu*yKF9 i{m6!wtM,4?"sDDTŸ!ʩ` 
z4\85ΥzK[IG-ӖY>:mvTZQ[I\kI[qUw"g&K^~}^wxwMCĩ갛;Q kn.voBE Wӽԅ_D#߸P ̰:iL=WҭDҟN ,SB#q ^)C ;))&ˏ*VB`|!nfO6%\nt;tCUvU2ipKw)Wr- &wS? 4_Z4---U*K!/`7!FGsYqϊ2)W'e-qȟE"gw[Q&P5A 4t;Cқtk]+R!rmS-&pa'P5A 4t;!0O5ov)?9xE"qOG FXfxJSFScrPkR͸r n1cͼz'NJy5[F&KE| 4Gy\?O$#MN,S4Z6JYIi4>Z+aq֤Ě]PDqjt{Uc %Um,p[s~a77&(!{`pW3,n?xnz]`ԁzy( yO.wyVmBh3` Gc@~Clt8f$8zTw?f1fc ~*wTri+(\=,O7ט!?FIWAŸiF{&.;6-˥J2͒t>6A(\UO6VciշWTTKyaM"] MXPb,B Uil`'նr5?rX+@>Hӈz35b &.icD.X<ʍ w,8m/h, #6sICS;U)T +DL 92Vp z,bjmv__+۩ (]ߟLo[i$D$-I^IHM4LZ(pm95Q<\5辺ׅ7 f %إ:I6`LPIJ}?8͐*ɱ$e3  H+ FJ#~|kWH|?G@{|wv{˼V]A&O|B7|h2?gVdV>#Z鋷cT5F#.K H&bC k 3LA)d$,/àk9.NAxG(8 B a? K_wÍLL H43#.32 cfbnS˩4,;ЧDpJWHP ۫F-1k_MQu4a"3+"&>P0 R&bSԀ#) {e16@K9Ak pDp`eIEuWjSɅ+0\#LbZ_)s?6Zwiն p௄JFpK "Q#5UMpw *8ct;)l|m q%b<j Qq=SjL 9aTw8A:$a)W{yVBĬcC0T݊Auux ]pA0A`R1lt bYpڊ16dZ2 F"<9L.k{Ƀ7ça'PrܲlosaW^~ JX-E9EGab@^Xu)e ֠m𨱤dO1?z7?p.MRY48H ؛"'ryS R\a-u02+2j aIɎCTYҧ܏j|_p~#*yt\Hq΅`\$x1"$EXS݂beNK-jw8$ j 5中-$(IB#[v*-H$$Z3Bz<2p"H|֥5f/kD 쁤ucS @$)Flȱ,_g˚Β(F 6`x_k)Ux6!}8޵>m#ej8ƣrJ9\M$hkWf$N*58q^($X3uhtaj*}aԅ/s]ZQ 5  THCv>@RdDki~]j+fDNk(J`Jˤ"—X9ͨ>& P:TPOחky(gލ}!yCȍ *zgLfdᭃtOC\)GR=fi+"4bjWU@,~ď~n13\~%ETIa3)^)a9q\RPgfx;|A-&aO =L Nӕɐڬc)qzB8*_ɨ; '65]{[{ɫ P!X+M;+*F͎[86ǏWFYUYs_;eJkgŬ֘o⪠'RzFWJg&=j!H^;ͯUR.3yAԨny)B.5OK0y`3'2n_f D=|([ "r=шL0_%=:ct,OFW  81'p՗zk D#;xófSos^ڔh[\n8z`BZKrw]ԣ Ǐ4ǕGW2r̷ J]12(0BSmj_.ZB1[1zO?x%j~^Xϡm;kGL7w-cߐ=C*R ?Zٗ:=\gB5SXjXm9tY #ż4UWk Mg{ ds6W5z$F32%͕;VTWVv] !ZUkA]|jEm!_5(ZK5Nf^WbеHh_(H:={1sВ)_+ y g7@i^΀2.\h<%Ы@Z;oyylQ7 synE0ן#Q|=]b]W/ m$Gt\&׆15!}uvԄK;nt1(Tܢ[L3_UV2LBch* GDƍyUMpo1 ~gj~iiJ>9>!x ~顀cIpYKDmG>Z \ :Fj tŗR]d@|Y E&cd NFC(ֶW6)'OHo~e d/IAnn'E <yO:Y[AKv-#jv?n70u@ G̓ KZN9HY렾;<-Qm/~u!x׼b19ڙ9s%sFra@P(F0'y)/Vz/߸ Z)VjQZ<0ؿV몖N47ۺjFRx5UJ*4_f1sU"Wo+0 a4i]=,Gg8½N)bX LzPlPNHRqršR8VCőףw_Ufomx-KVJě eY.w^~|H0DKZ>(ܛ0fрmр*S5AczEӂ3 ;IUZm3o4`sU9㎂LHA;h TۀߌG.f>/k{ӝ6Z|/T2\.g\BII$)P*g+B )|VB$ IӣL|eLUen_IU4, (.w K݂#U\ޯ?-53T'?ޱwoWu55 ƈy5>|% lY=1R쾻|w֡$0~ۣ˳j&(-AoWNF8 z}2B)5\ ?@LPZBxas1 ͶL %MB/n@p10JH1\XOC| U*hdMĐQ5属hJWb;wٕ\"8 RvϘ#[R"2Lt kzn⦃GK; D2 D12㫜Ͳ_<QyQM[VI@2Rߟ߭@F׋ csyTB9tvп'%v+s+рINvv- Swi1 3$^hΙ]*(?V5hoRt1.f9H#u"W%Z$Q$y勵t~v՞1$]\dݸ-뗛_[kXWg~_qXp44ȁgkQ@MVB9uv3CގL›K3տ7łx6к?~COڎWuEHϨ/G~6س^܇V9M-H $BeV]X.فwl#>K]"% LJv<]lbv/C٧Qo \YQ7RTi_^6:IQǒqGntL27. b !5М>U* gT9a}r\Ѣ@+7b5c< {G;#Rpφnn&e`nEɯ kӖrpcgeʨ̈́2T䇤Ɗ7fEhi` Ku_l!-i"8Nc^(H+/:hxn/s'-(_Y60$u ~T6SCR4~g[Q&zu!ؿVQ p-9yz :^]&nYz+[of 'q*7*ݱyum_}T`Tο W}QYW6薄W?< :bsw½Um}gY "@E_!o&2'GkgAQ}X/-uWk'Ʊ කB8XM Z]C;*`I%Z:w*CBX^tn) vbЇsC!Țmhȣ.h@0nRKٍSb 6]<5\rLпF0V8Ϳ?7"Yt|^Gb~(@iMwJ)>A bEꬋ55oqU\)fRGK$ڋmpB[,WF_Qb͍%[jkȽ_:ٓ){rM-Ƚ۲T*\k;s}n7DoGo1R!: n^g/csA{qM UrZQ^+epM6AT$}3JZ`|IBpN@ ykzT(G,4BRA"aö.ow~ڵt4iӠu\oצ=؜sW(rQ"/bāySks󂄦\JTݮZkk dj8Ji!Ƃ!N<@y'vh&Lpq4KO D՞T!u:OEU2t|G5miZmlۺBYLo fBX!]ХKe6w:#j-( U!EbBG>nce (sBmW48@O=;u?gUJmi  -eK HRz<- :"'h?ZYm_dJ#?c-a[7-F𣷧WR ɄVL&N:ػ"x&ETʒ/%%LBt(Ӳ^+]gUm_J}Y)RЍemXfQ`eRh@"kܛvṋ4ۚmF=6VzHϙ Ckϸk8:<8m+Jgq9C{dڪ# Gз(%.Z*γ9-sAuyn -)rO aJ~YS5}1ʉ@|{xx=<~K Z5znVԱj I gJ tZ=%Q%"P1aS1)#˿B`mWXĞA#Ld iWMJl5Iͦ([ XzWC|VLP(00x >[¼l]A?ׁdHfmXcW83PLĀ*_yIM$_$Y[uzP Ђ:[8uzgc훲 _QR0c$duh$MNsRhl aH 컫ߐ>$0MɃ/鶗{8pܺ0r"tBgvkyD6BǍ< $9h4rBRw'28I).m=rםh BJ#k.e>Eh,ZTZ-|LNH{iACWhn$iޤkk-YH096=X a<8.N ף@uC<9 *)Ft"xi2ӛtmQSH0c{E, ʞ}Ag71'[' bIIJj~}1^ ͜RJK) +ùp>i**)ɴ(!υWв+B3]H3=cTi)F58Cd 5  ױox&CIhw@nxJ SLܠ7zy:8!& ӌRǞ4pv4 VxuTH 0icʲ|Q全$*xٓffKG-,(dQ{$;ŬCU&zLdFM6t,U pͦ 4:0XsB#(Ś)LB0K BqfȺ/1,X €̴3d, C`-`mv3%4iČqޘ)9 Z˳hgg_n(*e[ ec\$oD@0D!fj&zO.gq:W!$֟!|Ml>no.-k5v6Iyh'a(%Nvv};{llv}r|Lhe\>=:=72rS4ۛ,'TWrW˛>D6A{R`{ҐMꛣ;,$I=ƚ?K3C+b VRq,Fko r%Jߺ'/1ܞ#'9k慑<349^\\|- ň R=АȮh#!b( ogaٱoZzp1ӫe!rL8ƢoNjljo|c僀n)0&\eU3Yfi(yY{f*g5 G;`^yo44Rb`@(c;%Y%L&HǴKqEXZwPJF{K›`,E1a)d )iƜ1ѵRw? 
M5C1>E/x {Uk)OIj/M2xɂgT9T N]IZӲ[m:IC}mzV< 4z+o T52ep:P:& _tr{@6Z,M @*n $+#*ɑjN]+=%TNy(h@r̗"$ǬvLR12K*HtXmL=)CJNS3۳GMzQ*i0M,[~:N{z7l{ 6;} 9JѻRJ57 j"D͐5 m(-ͨo:fs&(F=9u~Im" =vo!t,DkGroBImBPe4f(٥'^:jҮyf35k䁣G@OͧuAF t]_Pŭ _~xl$W'k /۫]H "Rﷳ83`WZf-^6ޟ~\.*G5?{~ܞB*ɟ.J\sS/}{ԯG N=EYt|E K(򇌗Q$A8=@2'%URq\zBIN::f.Ϊ$X3"𐜈ȼc;~;:ȸ[ŽdKk|(˜1^\}PZ7luzK9]؛D!/0?zx4 }:{`LRٺFK6Wu<4J"Wd-hȖ-iy y1˓s~$k~պ6rP-6+t浟c^oF{5K6x(yV77 ru+iX.8W}6PDEPCvYGɵY|^ fkC}g,ګE]_?t?'9oYrHBq^4hCzȆ 86@VjNIz5~wm>\M?g7fQL^\ٛOvz|>qNjWg*%Gn+f2)2L^NhplݍO9AϐЋ¼cJA3@ެ?Leyc2ӑLqnj5;\^`2m_M4ʒ)5 eLNI'0n4 1Y}L}GIhXw%Kxq=R x,%pUidQr+θ&5#UGG]p˟(|%4k H{9\">ox+[-y{+kUh&5x>h W72Wc\\dCvv?S9iqWL5dT!j̅.I))TJQ4Z28@4 AKռp4pl *0Q+p3ccطCj"HFi@"sMByܵ $Qk)*@qL)ۤ慷 j }lUI`W+mH»]^Ƭ=[PwY$Ɛ\4`>R1Kc0*o2T"X?"jH (FD7ۢVX٘*hvT[E $U.@ D*@7VȔh"zjS@cZt#RZ_Eҙ)͞{Zvw==&`{I>Dj#$hcϐN9@kPwou76t/~.3/o^{Dn$EcR%YTr ,eH^H{ \Ǝf$tਔh{FnQ)x7:0CcH($T7:ލΌ4jl]v&/;馕]?˱5]^hgy43귛}SV`ՍjtLͻ<;`֋QT]Ȟ@_eςڑExw{ #M׷=߰{$~_91lwQϥzBr3= JFb3MMydP1xRICuOB15,P5{}DDłGx2,A]ha_ 6`g7ǀCޣ9C5sAlVZ涽_^h1D2Pt熔|~ov!fy|8 z<򯶡+2a@^5R0IV\衚ۛMM=a{l2_[KduFo%OF?L> nb>-`u*wEdո' t9H {g71"M"j[5@7TiL5:ygà ϰ)ܶGXrkl՟]-t`=hWHt4h++3GVE?!.DD G&yc{JfrX^J"8ay|OѻQ܌)zʶ5威PfӞC/-4dG&mtwLO1=41mH}EEZAB^,(Q*)1zf4$"Ӳ8mEF}_P:p 14%dO[B/Im;y@Lh&WNzGDl(N 5De"&s)HuIB^)k߁6 FpisqlltwZgJqɔ<^`T TC[Ԁ_Ӵm;+9f$eאan䆠9#yO EcabdbiUWo.HfU'2"i͠9k^8[!ϗm;(aцw| B=|Mg(zhe߮A/* @oZJ7(HVw (CAe+|Eۖv6%dPL8o@a!1S=(=6\Jؐ=o?{6*S,$T'[+K,Hr2٩)6+MʓI_ٌ\%'&և ﷨ˇ Q%quV;9)ed1bRݖcזؖ3_{;v9m , XJ3*z?QbgX%dСgFu{ԛYLjD(:NW}UOs "X+mq+ߥ]k:柼<9_TrUowGK@uMgI@5ގUs$ vuy߾z>!q#'Ebߠ7 f13DIo,:+f0V0:I^H*_o&B]yYN|iV^`PNgzo&Mk5+Rv4W=Gکo}WQȋ8zQ%/k.B.~0sO'Rߝw;gD}bom oҋ?{w>N!='Y.;F'=)ߡn /}X>ߩf'= Vɏ܏q<28 q7R:2o҉GDÍ'Zw?{/~-Qc~0ڗ{^53v$瓻Ο܏ ޣ?Y#'e.Ϝg!a?:73dR+xcΰAͷOyGԧ}r~n7lUd?p=nrKfbΌh0%,_g7Vsi勓cyR4}b8=)ھ/Z;r1Zfz0jcw=*[`{/^ݱ2K\A{W&[lG&e;2)ۑIَSەB0,B:>c}DG\Rk*)BT":RFO)W;W&+Pde?XϿs-= m; GfIsڈ&̥Ȕ;C'ijz ",:L 1A.߽^نIsƷ%f;F.Ұ)55ZrB Fȓ@_iDHaN"'t=нsV{2 \{K9uEnAEnm!|Bp!ЂYUeevhEQ&gaQ 4vzzr~V!LuRզzZ͓,[YZK~v$s3NjGY\tLo*}#⿌~.gu~{ ^/ãlڭa)n\ "MmLtQ3 EW` jx|Πu*1McZWM]'6G^Q0hRƍBUoÙfjIVW (xSo_ևݾ=XD,vk0 :El8]:aU&,D֫7ο.uH[ʘL9?m5k /c1e{ un0v´^ߦT[5kHS_^#h5m+3̏ٽ] gr-l.ZIl ^na۱ 3x*ˁZ{nvshPb\ez2M?ê8ccp;RIqmƳ$V.n^cZ&GPze&. s ๛mKf@r\tۗMt z)heuk8|%H ϒ ΉHWgb s<[v[wO7{lY.XPԘjf474OAeF5Ak3>m ^4;&4yٰd 7AK¥*g 7 /;N T`ޕ?U()5h Zv/NUg0 %RI:vmzLx |yu!jvxVgFemlImw`,*OIPO祎iNm۲xf^Q[}37)R]Ztdwz `gw0 .EQ'U)\ >Uڡ!%pUi/uiZu) C _(D!s\M=)H䘊Aw"ʑ4b^D?hQbib6KӃB !r)Q(r4w1r!=2 <++ҡ41=8 az ENT:;u zDPs_!/MGO .Ц G. ] dJS.u" 0PfēJ q( ʔCQ!煾y@<B/MI<I #>͈p)7ruYH>IS GiC?Yi1ˆ £/ }l5`"@Ҵ1$Q=SbDOxrO%%2?Hr\RDCZ"$"&!g"R*h$tK'hĐpT KLmL> W1r$#/TF(=̘πSwTp 3S MԾy}*X#H1H K@< P*KH#2M w0KOpƺtU Xĸfp,d}hScLzCm t}X:c)B3D4w(~0ٜ+r/if7tPs|IAԸ^ Ōqf"Ea5"<-Mf>=aa% )F$O qx0vAO j5]8&k5uZUe6 8ՌQDxJp !.Bc|3V`H,KR1}x 0{J'@ =aD 6c3ezyM+OB%Hn161cհ#P@B e.$']@LR/iG8^Pb&|1o`ñw5@BHY҄&2/%IBS6䈹<jq$R jŷ6 `&42{V8B>qc/^xS?mCgA+}yjjylO'x;jf gc w7^KG~j!o}:dZ[}8ɼRz׎'b[;NwjU'+UrOx|IU<ADOfɭ00 ^9"5w|Ÿ G6xbꦭ.ܙ"憭w۟3/V}Gحt|[[$K}nײrzjE^cSXdK=U U^uXkqԠALF[^uj.3jdeϳyIm˝9'{|8EC]r2%FfjwbU>Eh 8cr3|x>  C>wjw' *DOy[/Z%6Mlc4.r+MΡ4s"2c,)ȫViV8;Z;UA}H$8zH_D9B9U"* sS俳zkUl_Qfښ6_AlNzW!k{Rl}9JL7/?=/ EEUĦ(`MOr۱hX&U m,zW,M)ŋo!Z||U۴!tuyX3>;檅ִW=v@k@.L|(䭐oNͤm[Z^ϊq)BW c: c:.7kψA"UQѧKf C،̐QB aX-> V|6DYq1΋ĻSbGSM1qЖAl$Vvcy{D% iM!ԘD%"^%>W ^ªHWIљTwvc07H)8_2l|9=w3WWWt/zPNX H| xm81pbdc>ViV4Ib7Ky |sh(aytEA;Y^_ϟ=Dy ,*qu<?GK3vۉ^h7yS2ɂSfjzzFiAMIM8FјŠqQLh]le3Wf,v3 s4!XH|إVD^^6h$fRPHb #5¶ sѶ֭d TY[+3S0Q39IA]"aJ[ďщN)ql],#u"h谚Z٥ю h"γ,0$;fa?>]D>a;t.! 
V^ c( 9')R>3Ke,%H G0_-O#e88DY)8w K{Bz,9 UlFrHEY"f#^;3jr!^@N$@$Fj)A}1y(6jGj\Q1ժ50 'O U;QI #YZ&HL 0H*DB9˭{NaVM؀s*jEZwJT^!P tQ:qN RX`4E$K1RX5:ew'I$4_oCvy@=qGPvHR|1ђfΊD33in3r^&HP&q`$V0w4䕐,Spn3ð:12M3oitjYC(|H=gZ"9v:%ڠiSƭ&fIB͕HM$JpjR1'5I.Eq;Kub[UL}1~b9.@ ə/ yr &<]HKƊX#_+CDǛA%~WoXȃ=`{5v4^S<\P)7n,X9x {$`^T\]DL&(O֡yx1DRljQX18>y%,X{xXs#Dw$u!KX#s ~T]+z7ؽB#y{uDq/ӻ0k J#J;rB}9uEo$$`YEow=,`BòGiNޫ di gHp:H!X#{TaF: WG+z@Q@X*jk k<(#}EO\|VZ0JDx~wO5ZJT; z VhXL p+P9LW/,7/3Hf|K+R^6Q,diۧRoӶO6mU *z >ͩ>SmpU"#Z9?NA\ ّ<*$|2/?O\rE2:K}31YQW\HI%NvTuJ0mX2C|t<\]+!n]ɷSrV*'']]d׹MWNn~ |a_6Nv8VE;O.gU-姗+(ͥsټђ)f3g!δrKt-tJ-k7{sIv89-K/_&ߪ]QLmq7l^,30[VhY VwdW_F7I/~'shEZ5;B mr˸ WET2R~or{w$A2MD̕2xf_>Ws+n6{Q~y#!tٔ!gu:e ^Θ BZ04VWXGPvC=۾XL4 >dB{mpѡNb/D^-)Ο8Q&H*k&W&g?^-5z__Ͽ^OZIE ƫIjNĶbJ/v/SFn>G+m.vy8ێ<7# {d yoY^ w+=lb`2`Jn!feE.#^o[K˴~υlY8lIB"$SD>xK}WqIv A }Gv> 0=j [ {wB脾v;g׽ig[=eJ HHxh HDkP2Y[ڱj=j J4+NF ŋQxuz3#]t,_^]G~ջ$ fW?.I#XvUyFvb!UUb󁪹Z^m%-}lzY8*력$V'6WڕoeUXQP8~T0FZչYV9J%8+_mdHpTTG=jV^;8jM4ԠD(RZ cZVϳijEgZg ]UeU^^54i9iV/iV Gf"L 4RHs~Fv{k&t`UIʍd:OhxO&,fvS0l7`3';uնWiYp;cߍvEqFf})X,דqVat5(,C11NuP\4ɬ=J:J \I8 oF 'ZDB3  );7$-)3Ғ6eG(ԻGҜ ߹I;;܈ J-ϥZ),՝q0JsmyٕW9##$iXt *[Pr5ӆ`jά1ˆG͙/pon=d ˼ 8xYd@K̍2:}BgR X7.N3[^>Md|gPS<ܸx<גP9\HKzLTY{3=- bޭUf+sØb `U~|j];@d8UO!d:7 Ќ)DJNN5GCIT.``翳}{YݏˢNI/Wd2+dLS.DRq1Gޱ.mpKU>_ArͪCl0SAä&V~ D⻅Uw!KWY {L^v*TWQKSÀL+ʼn!*T*i#XHny /HH Av=J&~К M$EuHcrt=[fa"(%թRPjսdXZXsZ:'W[c9F~=142.I4˜c o_N۫xK36ݪO^Kc}$F]xo D/@}%{)mі~ *)˴'9ePu@Ou?t5QI|x;=冧 #[lbӟ59q ]\2i$ "Hw; ҧ4uc_)9= ]˷Ws8?\I,;)i+Y͓Z0e:g.D(FKu]&ٱk|a _@} {N'Ηr엜/Y~:}c$҆aD$1,1,VZdh)h;:-s* _GwƬ}%Ҍ`Xe,w)0)M|Yh|kx%,=h:)R#k cK!å6~X("LE#Ko/!<`s&RMA3 8^- W˻1jfyFK tX&y,U\&N^>j1Qwj  5t8s%)hXO`i.SuhB\s>e(:PPɐ)4䥒&=#VJ'D|!H? L?Oi%uҗ;2VIkNLD;)[a~,Bl>8#!"IL2cƱa8T~#X lG|Wa'_OSkPO n9\@j=$l=⌥s:̰D.fVI-𿄸$AwAti-d'M0wO_sNrYjo"}0L8"=֜6j1:H?Mf7Z c>zB`7q)w)ç-Or2(XY|Ȥ`k R* oל6wXv@B!HobjIn}L2<RpxLJUR|q@mzhSDuQ@Tg{HWcuwb~}d]6n۱ !ݯzl8{8D6?[)S`EqQYL_}|ǫP8e 6(;в5эhA}4cD01>V1j;Ps٪[8R2SJ*ȿ:ƥZmyx @4Ŋfѐ1QfiF/F̄=-^ҌhQݘcE@kJ#1$S)\ )q2Yluk#T{|Oe[^̆#OLpܙ$LM@(N# ژytPr.<"f X#ѐ:5:Tb7!̥B2V6Uv0DD} orCgmUd*F&)R&զ!j,wb IHNxhAe:,UE>^y+*B{;SpYGn ]Be[( TӻB,%dps=K(XNqo eoڃCcmg|!s bjnHi6{E\7;y}V'M#V%f3-Z;3(l>owSB6f3py_<2nڙ1HV`T҅U("ƙ]ƌW a.w}e15T.l3q>qJw22gtsj{Kduˈ6o1Sj^^yYA &m,m6߭`_v|y: qFgQM|]?@H> 5ݛGa x/49=z79Y^8[$̺n~sFwojV}7AdEF"$N F&DubGh =i}95Q]Σ,?n*e]GBSgS)&0MVgsRɕyx7˲4%W4<\m2*+0YSz^\ƾ5gg70kyaXvͮnhvՔ!IэpN xɒ+ UR=bo/{.Co<> X)7y;/v;!h¼QdLYnF \c\;;ǟNY Ӂw4A]˞EJT;f0xX:L(4%V0*"Q` u;WoMP_QZ9nI K㿗3HN8cZ;n.{5%M%3[zQ:3iόh*Ƴ+{U!tKM:Z!kw$Կ#ꎀpI*hRTl/s2=hMH.;=Sp?8m`Z\݅I>ER?@k6!d,s$ҮP1**/yTdvM-YNḧ́.;c|[7_A&Nഁc߶9~4LyaQ45{2v\\rp qQCI)Ao-rmtbo$jrɴdb'ח-7y6nbb%mvB`WpvRY(43>ku|Wq,kA۽d'vױ廼W$-KLYğ\dpp{򽄂eJw>;>b(qY̺誊mk9}4{;-;\Ϲ^J`PEk)۴LZl%XVrV#B_Qː¹ ̒8xh)ɥe @0 m+x-4"`HˬYD`2F -QaU&kEYIU kdieL*!'bJ9jGiL "P;_àh^!/8(C}@ei(GàeFjVE૛#.(i9FT׿̳wfٳdٳdٳdٳe/ު{g)Joe:!19\ v'U)"'8z-P!GbhT@! hAdxKNf)_i-amRT>KQ,EbT wƱ+qWSj `K#+J(SVRK _ SDkPNB ;LM<< %nf:|;n0u0xT<{1#ԥ ,20x%!z_Jz!5NE<D#QLϧ\IwnG?:M߰?\O)sK{5Z@m/l *3EQ 1ݿ&,9]2~yzۿ?v{uf!ke, 6dtfǶB %ds?{ ]}ww145ϻK^OL{hdg/^^6E۽Nw73ߜ?{3LJ99?|zo_g??y/_hL4QXܯЋG'?y0Lɨ=t>\ty0cts3_Eu}jes]9 snG/>S~'ͷaB~S3E=o? I'qŃK]0~EwSNsqa5.$>vη!Sl{kx\{ى}rݐ_.o˺Ӊ:vݎ~_aȇQH!8/_|q t|P^?y=ow`_^e ;n8 ~2_vzWË޳o0#o^{y'inognM#gY1oa _8*u;S> A{4!;=7~kO| {5F"pBPjqr}b0.:P0sޗ26]KVqNV,͘lZjRe`/e{Z /8Vs˺~Ȫx k| ۡS'={2IYWg5z jƛ<;=鏝i:ϛ΋돝;cgEK~{cV_ _,5H{ &#Ic#6c")7evɑ4* d0Ȉh`.v}(XSzk7RU.˒v))-"vɟd_]Ih^Ib1}DVcBDŽk„U :&tLp??kb?\oz-Xk076_ =ƂhGm>덇}\PrD(&8#dCdPOƾq0)PZ";E^EmN";Ev<&El 0at~u{< ׊l$oѧ\&Tx[FL(j%RU gK `ꀵK2&@R&MC? 
H8xᅹ~DӟO<= EQT"eǖ $*Z%ApxvrT:uu& ]N]H]eXcLc^'guGHHH:U< |mABg/^ᯧH1c@{c@HaH<<^yJпiG٣iTЉ6N%krlrlrlr؃T %Fq͸h$ZrbmL]lM -`Ёf)V yz*:,z=9n gLbzO -KpƼ|\s\iǰV8 ,ǫۛ-5A\o6wOP=iv5m:^l qd&:tVa髟TimT[*4~ԛ^$# qX3|l&]lR(W(] cFŧRI<ίq,܉٧C /j['P8Nl5ssF K-6.-& 9d-XUj tnzl /sX[Ƕ IlK3f1P!ڔt@L+2ın0Ch2.d;'(Ssj5CExXaqJJcz]z9 1}cec+|kN7aX0hH9q3R#M"}X}x4tT= A,!$țg~'M˄) n%!EtC׮Xh*” {u@66@bBrn,pJ#ɦX /cm׺g=G.:IaLcuXON<`tu+\ou֏UÃHVo34-[XP6Z?niãUʧyd+m6#/cC8NU?S~{X=W[g|wŚ8N(K.>Dn~n Zẉ;J@ZzfZE<.zpU&k%gz휓nd{xBƒ% ^8ӹʼn8705[' s$/ї;1;y;}0I>,\ šiswn ZZ.ӂO(XKI z.҂gWnŰ-pC΍ vnӂGa@/=-xfi; kM/8-8뙽J v?j954/3V7z񖌨> GSNX{K +@L;./YZ=~g؎o (zreK@Z;$yZs:7W[ ҎȀCRubmӟ6CщȻg(`<kg(+2uxj6:^̋vk0߻}on?o~~uʼr)s֯Oބy/BWӷ^.U뷭K6{9fbKl~)S*^%XlChM]]>CT)PY8~ O{&^SW}{3w-qۢ r(jGZN1љ\ }0YRle0>ENZ1fuByrlgbWAXs*5-(Y1$XrBB #8{΂sq q?ҟq;vx;dNR8tt;7SO{tJIMalv97y}D<t%lW:gAT[ g ՏS<0> Yhl:t&y>}:ts<';oV1yJ,í޴~Z630#LCkr ~)c6{t\~bo!hVƮ|XiIh=*F,L p9ǛmcUfUWϥJ֑"ohʦ7 VR@Al"dEY\S118^bӠ%QP>;ѕ}061CZqy#%t Zr㥶G{"J`.(kmz?  F29f+bŘ,LVkx5, h)(ʧ?{SPNAHA[e.%L(ç)IA ` 4!9S^VR>ʯuu vv,K!T#q=3h=/]Vک^,ZUI/W@o?vB8k=%#\_q <7\ zQ]FUVW԰~Zmnm8?'(b<|<}61r,yAΖ4KD72;W-fPeHL&D2L8b'9[ΑC"ϲDKO\S%q=-=-=rD˶Ep/ۻ/;3-Kۀjbl ؁CN䢜02@+()zo&n`hrh8uJ;{}n-K1ob'5A+bR! E8P܅Ʀ3ngܫ*w3ngθqǸd AClٜGv\۹|k;v\۹x\OvfwRআTOwrm6;(I`?ȑi vu ` q;_{"ª+g]?ʵO,MTnH תn9 M"4[)u0%fIŴl!U_M#󨀹tL{MLVWδi;vL{<}Nv^j/ëv#m#j H-N{ vov3EMx|cћ?2.ٳy꙾obI5]I!U(b˹k)%<Eׂ԰ D&0 d, I9ӥCmkZU~ߡCmjeyEy?O|݉K 5AlZ]_cP2fr>!1ʵWPt׻=2Q޹l KGuk;3tAt훯+:gm @)'/x-9Yj"*<5$F2@4$_tt{Uǐ_{uttQSegw*>6? RK> }@.~uYmv{@ {(@S_ H|A1?m|MjT8XS)N k!%fJ;K]e=%-58:]U¯S 3Pg@cM3~mى@!xkhk V[)"G5)Ćv؀9Z?;kmgRɭO&?ybWw㺴”&b(B59 ;#`ȖmQr. 2hyZjpz袇.*t>롋衋衋.6 .'0Q@2zsg"zúÂb> ІhB$V#kt;=c(@S'Ə#R~-bD1yY$"K5(6Kc_rHBƢ2@g@LN?D?}rON?~G?$@.4߅;ѧ"5g ]Խ!ː:[ӡg3E`u=m+H%SkJt>zW쫗jx}FѪ:h ҙR 5P'N@:6-Bo Ç;17R`И0@JL1Hrӫ$y׀AhBSTa!G7~.>m: 9Krr4]B@$9`Q M[tbcZ{:_avw&Wۏn$ dm$O[,)'4eu)]"J &t9}V+ߧڌk4Ѓ4Љ0?h ӯZVU+p&_A w%t1aϥy8>xs/:1AjlQMa {TAGb|ܯ5!0#`LFG}_-ڨűaC謧,)'mӍ1f&Z.b1<Qr<=ltЃzQ`([ĵa·"Ș\X5dבj4 4l)G4?BR)ADɇbc ~b=)퟽aoPn6wVlFRE!mS֫ s%{m:Ku=Qhr 3'ݍiP?:wFgQ}rU󦓉|t$q/=ɀ)D=|o}م"rDzP_WTo>:KOAzmYz-,nC(hWXFuR*x]0l*fJʦϋO9yIx-w|JC}z-?}wOvw+]$ٗ<}6]ŃۻP$ 3O6͕W9/.r,qeJH۬BuKN1!ޜtP>1amy<9;6Px;@>?b$d RTO t!S㪋h+(%Кu*Ϯj`W<#>R?:'; Y-Ru?fٛRRUO>-l;g/Mxl?l'n`9M4ĽuðcxIkbHqS^5?}(m3vqK]є经vjOIJ`[dg *'C&]QF4nںwE\k 7ヘgn/ōP았9XWW8PŷAq>%I/:ߴ*"17YuL5x7@;t4d](!VtSc=.~;wo{/tZ^A>= v/ۣ158Nys2Av=1 3~=8{wgfA(7 3?oG÷z}x(#[gGvwӳY9Fz34w^?XCAWݝxzͶZX~p0Ja)'2mysx]c!J%F A60l/c^8Ơ4^o^/8u,efүKn^=HQ21\s@5"(Q 9kɫ{G0g]x;\\-H · ۦK 绰F%5:RҒ|Iڲ. 
oBԣMGplj1tf;\E%'K+ Zyav9䫍J,ȕfKJ$PwuätxI.I GXQ !Kcwƴ&~KrYVc, DOH4 5X΁a&ɀ6&&2t}^c$E- ɕ1:(rj+qd#iH''[(☝qۑher%!1\*Yգ]C},KJ|SguT+-ʕv-;xJŠa ̓!!@2J1DYׁQRizP^8CPdҰpؠk, 1~+aKrcԮe{Š1DA <xf} Jrlou7TlX2_A7 Jo%@ҠpؠB :(wq*{SeK@+Yㅊg؅+k;4VU'oL'M,Pcˠ ӱ)I%z6H˳ I MV/W=1Md̾| 3f7V҉@ʸE=fkjp odLf ittސXe^t蠄rFƚ-e7ϙڊEd ˜dR퉵C0vD[,ZKdE ?DQz܋zt%tؠoK3ZW5foM$,#xhRMȦiojM\slSU]sUp|{WvyS%*ru Bb{I՝I agGG?W3xHꇌ\v $MPK.^_uA)}n$'7q["mgNRzbI1Pr(r.^Qbs]Ő';K% Alw;^"+s6x QID.*D϶qn?⬧o ]4(0d0 Z-v/O¶`KFN{%־-r]mu7OhQ%ᮄy#:ܿek7nW.Ķd=jiPJ,۶;M[%2ǽl82OϠnfC)tXZ}BIZUה(*$Xu{_rx!w"|V{;$uWQh=JÆu(61km1ۚja f[>$yb`xϐ#Hc$Sӱ+h9-Tʡ*؅9F5;!#mt}2o`b2bJZZY27RƤB3$XGn5mY?̝" O0Ľ}{:G ד>>~lyC}p/d‡ N&|8x;c(cLiXBFҟXU\.Ms5/O@ݞO 7#IC HO㵊"l !4@Rqb䜵ALb,VX)ڢJ2׵oIW{e7GwGpdjt:J^R5j2_xסF@sRs$*& *ɛ؏;)T ZRK/o9KJЬwoaU`+V_s 3LjF$;EK|_bjƔ,IQM,+q9Io2y @q;Jr"`_=yUܫ$b@<|ն:.Z_yOՒƴky!`:[(CTƱsAVpFY+f3 4OcI]k1|ʯ7N)M!4k~Csȧ2"tWkq_̇&lCf1TLߺ {X-[]p{3%b,GG^8l".O4$%]!Lo0EAt°4D: DmzYL#'v`m:,\krttAW#^Zrx0^h)ZWYvQ'ih9ZvO &Yycqu`(V /V c} 1SVPI3@&Z-emPh&++{kilPvOt vgȥToN azL ɀ LIÐod2%s$ /sSDh9P|ntY-S@hdE%CIN<5w1%Pg1GD!x#s$gY7 zzI;0fl-F_H1<R|\0`J7KW=S!@ls\V/5#/KVH4{P~!5nt1͖,Nx"rGQeKM]9(IH"(L8<M3TsLQZ~re= %f1Zs /J{P|}偏][cPYEaI?[9t7؜^~)|-.j}I:B2(=x 04_fOd@}B_iGѪ+?rqvGn_ f18VOfW+e_w%)v 7Ƌ2}mD)p˷?}1{gP\uC%0Jg~^Όb+VW}ye@YWh6=¸ \F5(NM97Iz7+e\dnͶ,kR|#i@q&~-86g#m WIidNn~:F]}9wï>Øyzxwٟ( %ggٮ9L{ ҕ)=<|CR5#)RFGŻ?? !b!]4OV"- -SKƲ8_?/m5f9c?뇳3ۓ]ŏj@pnL2)]OӇ;282d=؋'ۼZ9 X.6 vie[Wnvۧ?xKV֭l:Go;sJ$9=ENsH嘕I| 搌 J'!S~сDTһV1ɬ~.~ 뢣+pouǵOi|^%f*oi ±;k.9#Z[QMb|WmnE$o,AAeTfXџ,2EWFvԻ$/SgDRLA\'<ň,U.ܱ$V(7ߟdD>W=/|={M6(@ ~OӉ0\q+rc'(T&hb(jd\n>ݿJ߇ Ҟ߹")AvLj,@Xk/\s`?Y|ab`bss8<" ΀r-<~r'(2yLTqOa?ڀmܓAY-¼8sɘŢnޘc;߀LDihD~PDSvi?t}=qWwVE'iEl.㘳^|򐵏' ;e _Bdq030.a=)< ͞o[P`C'Kn I_ۃta<3'JKI5dIO7l4 2K6ä3(=G8 T뷴{@UgoI7܈}m#rXn.8(W 8!g,RV&EIro]nl^Gm^:CrUA8~J1?p?!H2{1nAy6G&KŘӯ![v)+0,_+y>i^~o@~̏^ I4 T:N&0,KMo.0"ֆ0yW nAT[5=-/dh~Pъ^)uY, mםfOh#6眈>l[Usk"#iE5o>.^ܒ]\nC+.uᖤ @!V7〘%{Ė4#s\`y"RrmUXUBV:|+? 
D1޻'}0oV{Y<"z_dGD11~e»0v]nA*#Fd˽rbw1 Fv{n c#c9P֮q_x[~L@1{T M:vO`e0~¤PGEӰ_4ܵ\<.KETѓ{ZqToOa u`/O^fۜ=X/wS>9o\ ٯsV5pѽ!NQ/R]#{OnC?Γ1 ֗FO9CW4O.51HV `?`}xڭȿ[Өᄏ_n{^ e9Fͧw6G*\T{ (7 VB1soHVZ y&djfe7 pw{CNI+a@[d&ORU&,4$ ?s\fIy]RWҵۮ@l]ty4L-j!t([>VvlI67[ۦj)Yv+Bi @w*J+T(oa_"!j[߫ft:\hP=vv]@nͅ72&:ҙ"rXDLd %In3 ra;D!,dK/Nx` ÃdHqd2ZV$d C[]BC$#O|#nvNUD@`.g`%t!p%ocX3rdr$rR 9.<(L(̜I&Vw72u݈(~v31({q}ãIE:ny䅢NbfP;F 7~oS4XZ^R3hCfLNjB881W4;2eb aCPPw`yNkQ~Z\Xy忦8ʽ\ĭ[qݡגWJZ} *K쾐I) 7cNoU&,*=&#a]k+8/W 4j<fP3eBҶzz 4s+ &2BZD'1R3sS[I B7aB)i~a{!V*gs֡4jilP Œ qRVr3|`8 ;K %e!P|: ˔5mb7 [ь\WoB,B9@;yZҊ o|Nn-h^&w ].J(!4@*izߵ,ϑ?dρSq:%τܒLFVKg"U~ XrC֮ur0Luhe.E$H{1* :gxͶ},@ycI~aTL)ˮ+nq..$3aQ*Ok檺DaA φvQL50i 0yrp_0s?O.NL_C|L ΋}OYLVM)0µW3' yZvWfP'g5l@IգOj-F .B<}̶v?vʺc`Ac|/em Su#9W}f8Ip\kۮѡi-2,)BRѹ$L7͒8%yL ȥ&E@gf%Co P,A>;Q.K HI=RV?a@uq+hYoS`p4@\dȕE^xW4fU /SZy{׼sM 9@ރfj&ka8^Xc:>ڂ ZrsP?{W۶އ|ՠy5(ౌH9qo(Y()-Zovggfggqm ̥B m?2?󟊍uBVYNޜy%o;Slܜ❂eHЊ9*w=ZR8n 25M'~T\D1\o+3]Wq4w0t 1εqV?Pm.{$HrWB-L0" j>?XWυ}zE'R>J\=~щ|ȗgv5`'m72F 9TϬ`Y+$ĦQ@X*R8'9gԊJu}hz6b.ɓ̔HTȟy!iNtDK]tZ5^rDQ(#q>1m{.R[gj8F }nӬC]+j5.7 ($I0im0ru6T-Jڍ(عZ%)x"v K٠r<٠vj96`)$P)ǖR+y 4W"H~M}N)6!(PyEҕ\@ &pNBcUЦtUoa+D6YkYp%'!)K I/Q?0ÿwXO.'`c.E/J|ϯrE<ц -W"";kꘪyX Ü=j.ӒM5\go9Ig*I9qr$L{Y8gݝhc懓笴P[ 6C+\^B2CgjH*v!Ґ.м: V2 TiBpʑI:ډ>ťܯ$`Ҹ^#BHqiєϣK16AK |\aM7:({pK2f%W <'\)^֎#vl%"~$ׄVmA1Gq0qpYikwvjOG{؎,M [!QYyC4]qr_`UWE?`dl5Ҍ}_wc2jp Ҝ!=jHz&"Q9}֧ݗk"ǖDGaNq @TeQ/0l'7Έ(}xF3d["Mf< "tI2Ǡ;d dL[Q*3S_lAœR_L\&:tw^ Y&Hyc#fb](2QCaR$09Tp7"p&"\ Z"hY9GktıfZ [K*$U*KZŃ* &qByxT03z"d;+^TTQYMGz.^%vzP3ok?\~,'298PU7>S!`I.ZWl;}}fa'9zk g *JrTUѪD>E)"ʯ"3J>Yb-+w0tkepZT$ij~Jnk:/BGOA4C!zMV!u79BGzp՚VbޓU9Ӫ ?q#΅.1h6RaǠq°YQ1YuES1c y <ΰC3Q8.ַƿ;=8CP=;6"́;D3Eas9 r1dk0 *][C$)>a۶!ܦh<ˏ%oA e xpL2[w ^!<zwq0潤^}%%J^A񮣔7E#(SP5uw`QppSWa}VMYضTMNj~k~ #90|xaZ6waB >8q9ca0,Oy(HO,)-T=IR6\c*@bs1i3?Xjݽy򴢼0ԕk0g[rCe.-Ľ6n k{=Df($I ץG՚@#fO-͊JWk/BQ{+ Qy;)t#́b:r̾@%"jg!V>jQ~`+|ES֚c^kHq%9yj oUٸu*٘r3Rj%?jLJ)Pz?fIgjĽfX]РS`&~aM/iܾ.oihJm֤eSDcЄ r~09ig'+Kģpb™}ἀQhwl{*N?;:roR[Dav /vGzI?J0r叆CxL9 [`lk _ZB0V<&XD(ɨdmeuo$NV>}A/ύۍ;lv"9Vb͔ gqq믿O?_O/a:{s{5܅o'ҶݻWy?ȴ.#3QuN2>{o\//_s^=wy "Qqk: ķn̟bK};_tܨk;޸m\/d.o3 < :74}MڭgWO/F^z{|5~mSA+- p>y>` Jf6q.H9Q 25M),kf̲Ϳ+q;_A 乙 hWQQ˹U:zHA7VE/TI}%Zn:Wf؍ziM@Lu3?s`Kq;_i &6(Iczakt;p%H_Wσ_/֜kg1x.ꍾXzqF[|׶W܀!?;YbN z3%@\I>gɀ| aӋ1L//Y#c+5mݤLVhqcZBfOgT6d+kG`^u?+!vrä97fnPY'k5P߂>lƌt'_?ڄƩD1vĠOlF% j?3է13?% ΖE>yss(+9gH"C Vm}[JS!-RLBL2LOl4CIt4%!V(6 F]I 7iw}TJ6fRs~ljB 0o]8O}{ނ9"Œun4趷̲ey|N+/&St&l>M^_q~oAgՐk|>K# X<,=Gq ̍KdO_'ymmȶ 'o \y{pK+TK>k R|%Ȱ0z o2{]ҌL wԾm5ϚZuhe&{/fj{]a9g=&/Q^Gq҄9w-m͞rwt/=)ѸV\W׸%st677fl<œtaS)y<I%e>d'8R;a~:Bii"sA5>-{E 4g3sYpA+7~24y1%ٗ)Hp/x|-mO˚١  '_g^Ì={3nx{}DP٧%A7`j^&j1#мp_l>LCr C o0VJs+ xm`̤n#[RK3:6U8cú n蚖Io77;}d0B (P3.dMȺt KѹS Fa$l?M%?ws '+I!jxsvt's~mr2] c1rYv1&:xw)& r8DUw3oERF1(-F<ׄO2·[t0 N؃촷Ʉkp y e#vQ_g7ٸcZI$QD Dmő>P  16`Mcr Ke6?>iĝgu-}?r|;W1^{esb,χ] Dv~,T1u׹|³( j(ǠfEY!MexG:_z/z|2(nYV<-UO7 jJ֋Gf*՘@ b7(6fhZ*YW c< _C] kBӷm'1u M[~Z{ؿ}`<)ʑ^MZ aT̋r7ecFzpglo٫}U˿n,>fz= fσFԃGHX%h'?@Ӎly& =lViQ ~y*QX]t lt9vՓ [YD8 D K#2ZR!)womE@}<}wTRޤ+LwW3PY"t09|3[K?!0?Tf*`qiU $W9<i~pZ)WPנ.58x};.wU%oo+-m39-YGU$) >e:+ ێe *vBtDx+R&~*S& Qm<;cΞOPqV;?֚:_,~T-Sx=}I.Bj.7٘u6\ 53K,gr|G>y YWړ\9r눞Xj-8\&o$넟#Q4=9O8;23Q"=ON2c3KO2&C9}N6B*0Rvg *8҃xB'~s:|B'x/{Ч\e2gМ s\+j !. A'8 #.±> BTq@9S)[c ֖T ˬ&?y1;Ib2-)%9qݘԱeEl1Jְ1lra%E'VT};;.X.9.:WBɜJ@|hC(8%Luܛ ހ;"%#σo'^"0K"l ,Li<=ϳBIRdMA? 
>zB4Bl΀Q{)WhAwYo4SJeu;ٿIZTCea(iG I ?]j\`ͧ/`k/8|~ٸW^$'FroqY xZ#&*FtN3uu0~f 4<9~6NȆ9A5 ?n WTEfFE<{/7QWXqyN.hr)JVl.:9vXH-EK*pϿ`qE"z*M̔:SjjnEiC2uVR!Rd˟cQ受>?[Q)`ΐFƥ3|m(_~;%Cub9teot]~BYsg\q~U7J@;]UɷTCh@5B7O0E]csT ,>ׁ -r#Qȅ`495{Fެ?*_zphD>AQ3s,[0EׁQ\`9v] 8v r1\Z1Y B 6Q ˵6 dED4TH._v,B=Y-A7w7+LkbND0p}M( L(̔Thl'Qb-\Xv؋eQzKiO}{Qk|؇b`| EdE3[;³ey+]Q80)³ O47 iwp "#A8 G"|}B@!~yI^Ʃ i ]"t|XqN4S#{9$ry AMMD^?2v'7osX;{x1P9+rP1S $ O|7ыygI6I[ P)%`9XL%Qq,H! ­!BYNp`$97\b<M6!ȍ $cS *' F` i#a ɄԒwuTR-,|D;Kςe,ۨ`ٝܐ&$^1XHeϧpowkB5myHk?Njm9^ra=sy4LbL9 [v^Ӿl'Z\Ɠjc,e M եY|K%ȚQ,Yџ* v2\}&LS,&RI.:&_.35lb>zq1j"SRu5'/F)E5o\:!&̷:0#4Α4g#XʩrHc= 6 yu檳?moRgΎx!WUW z3nTxv/G gHouD$ [Q.rkuW[ށ TuJ¦)גь8""P*#B(`݇ ?n5"ZMU{,{^9BSNϺ[inCBdS\V6,XX I`_(_] 5M'bxt>;Rݛkwgw#6œM( ﯀%Р!i`¬S8X]:n!PL)Z00 E іE| kx\q`nQgZ8Bno~TÀ?$6b{s@vֻCRCq3C+A"Þ_UףN&K}DVK2n 8P*J%w??x?߽OgmZ3Ix7wy`#ɚ?5-gMR/S6Lm/ћe>cQvHM0MSX[4O} L9<Ҁ鱈Ҁ e ۇ.t/6 449f/.VX60=7G_&d棌4zPM}n4T5|["7~ppKU/sҪ>+\I6d;(}4 Ipڱ /Y.F_|2`w3>wxq!ENƇ?M^>ہX_birZFjlt/wJbQ KˣȣYQ(kՅF‰~wn.j|nKΫ\˯[pY뗫Af,fqJ\dy!8DUD\<e= ay"GL*"@A0!8@F6"?$J^·җBQ_yn yAj#wApK gota*-%fdf Wŷ8pH!ogʕ̟/{J .J F}~*N,>?Ad($Q鳔MoyuB22Ƴ%Ҩ^8L%'|-EۣW#T[IMȏmo d:cΑK+9琒uECy RkÙqnftɬch "A.Y6]oQȄIe1y!W\&;T!h,Ґ1iB^9b !S 9*8%,N k5UʇNȭ#tZli0XĂ% ݇X}fU{tczD༗F% j)gu.Yi]g4)b湈9:j7ݵpb}u!^$ǁEe\kόL16JE# !%r$51:99 p/I *wIr Uژ @Eib5i6 bQW.vZ:^a2e$ \qEz څ㆖ װܝf3Uw}%N7*SӤX0 i C\Q=J DMA g;sɻp %PŰRUu;9` 1Coഅfo}"ėLyc >ıZJ3xSXr5(5L#gؘf#n[;(dqN$Hi3<0gV0RS11ed, qAvqp<Ō)p ) WӹMNޔrQ R&:UڐxN)u}ݢ vѪмB"LQdV]N1L6 IjSrh,vA 4;uFA?E&KY.i.y?K'Rppҏ"ܴ0 Xwd%yhm h-*L6 w JRd2,(n=Ҩ;+,JҰIl\,#W֦uK7IXh%}J#PJV@¢HQ['(i(Uk[[;(-Fbt0mY#ui˞zWr - gȿMgk8VbhRҦAMӚ1 Td*Ei6b`+n[KA{1$MV]ZIccH Qc%2%y5>{5ԑbT%քnk[XK*noeV5yF&(Z0/Q,j<#"dPư>K ˘azaBpV֠^E]MֶIIZQGPD͖7U@\1^VԒlm(jDBrVIs+I [A 8˵StM HNN˒ӹVS?1TZ\TG+")我 cTM.؜{zM&F{E_ꄜ%|.jŎ^-=,)^?Qculp$ T{8Q@ɍDn[m_2VLGn29t=oUHܳM'X/٬J|Gұڛ6ĄɬbT _vB|InŬ|efV:O=3{O^,mcMjTw-d͸J޶ajmͺk; ], vZŴiucd %$˄(L2B*ʼ6ymxǢClk~ 1TJ ΨY!}qf1ѳr?V,Y:2ǚDk3OF=Df4lN|̸jKTM-3~h3\LV ˕'g5@?yъ=|x"ϕ0r˳}~ąo}ۛZWvܑ)uRBo.bR[aYjjޓn6VE=NV4f0XۚFCUmIUqK#VG0h n\vUo,Pyeng:G OWWtl"cj҅LҮ}Дwhv4sS#mVFLsZ|H*ggJsDx/5-y䯳Wb,>:-"U`E v']Ȫi.|҅> 7)#1f62D@F.U^ƒWWYd쭗m{TL>^"7\lFg28Fj>p\=܈'.3P?: N&GN,S^7sdJ gל+(- ćVdzeۂ x6c9JO(T?7(-c]xR^1āxIZV1fae3r: O8.I':]S&>OvG&‎b[R3ϝ}~ͺ˛鷌|!*@9`:ynX!(eՁ&*RB ƫ5a9>/@e9!tA?)S3F?|+N?W-5nԃߢh([% 5Y^] ψjڽ9ӋIO:`5?'8{ktB.`$~@{?}zAI]<Qf9z^Pg7x dm2yv&J(J^ﶬ:ҽ Z,w^ck#ȶ` ՋW]Ib]{-pB}! ^(wZj)DɂQj&dT+jei6 b'鱒t-ZzZX.#MŽT>YO{Ӟg^JI*6Yn88͋FS,"稬|*HŽSq/Y ldNJ9QI.ssry<7nɋ]mo$+~ \dR S I!EIZoCg$jݣ5vi*"/.MQm ዇h xP8fչֵ8 _xU9DAಶH J6"Fڔ7UJtɆ1;AcvhFIqpڅ`zH+#߶'-YAĸ s$q GCv؄QJt<ĜM8~mi 8❒[C%*ܓ}2S7 ?&;+dzL68.ωȈ8bP>9I~rk|xQ/Ѹ^zƜzo67>KoY) _ΥXS?7 ^Gv.3z7}M)u\zg`̠I lq, >>#vU$ukA֋9|qM/eiǒnj%QZްE7v^`KsRKWNGu琍#II>'[p)O<Do"tjzZa9) 2O$xy2~—U :SHX8-y+/ + cHQ(" y?7EF# 7E;춴yvẘ44M8@PL@x>8֖2FCINFl5kAdz/>Y+HOW&..`hs! [$&kH /6Uċn=86u tLD)B1J|8ocױKoi5MzI*GC>6kHw|ЪX:}O辪_L*k޸CdŇBѤ$2n ѷc^U pN*}5H!}|VÂ=Tl &:Q-Nx:z8 S2'NC_gODU/O?eu :ܼKhR,qMI2s!+Qͷ -Ѩ|x smf(%ȭbHr sml8ٟ<3'׉~u^wVŒ]BeO?v!~~'\԰ࢆ5,X q.8v<#Erk0A%hwS.fE)g`j۳oogCz{"ϐ|rUYkvdAJmRQi{^94ȪSGm3wh= ΐܤcg9("{^8 vñ[O=Dd<(x.lCz.??݊w HGc䜊&.& *Z^ Bں2К)]-;hW==*˹1\n9#ےpR6XvnҸ{-b9,dZp|$QnQ~*m8O0nxC 5xwj2i\:zS%hYx@[ Xpn2XWK|xYhyK 9,/witn0AA6z2簍 KTy:l ǛG&78fx>ﶇ~盛U9q~Շ.o>=7v6ۿ_: 1itYs??~NOO}YOf84|>>} %9xdJZ*k屃+m΅$w짺+Nʇ>I!?%wQUw'. 
5 'F)vl*A[jŎ d*}r>≋!T2l9Ʊ NnٖF/$A;Ιe[\.7-a^p7Zܪ <<8XJ1US8fF1%~S $rT'erKr cJnI\VX2'A xSټjyJ)>b!'F=z>SLNLf0rCL85{LsI!!)V bC6+1:lM% <=0[,d2VX9/b8Mhx7p.3m[6F1B܉X\6O+h^¡NeH`D9i=y¦ɳ3PTM.3y/;89}S=B[\"9㊉b⹐mzeǶ{9EgɝnoM&-p{{}r}ȶ!޺*X;T4dĒ`D8QcP#l }kZw)֫]v %9ŻhF-h|t)HG/>ǣg7K4w:N3ڶG~+IN࢒|&̚SUk̙{l9i( bEz)KRX'aeaȠ (OJAEQE)!Vbʣsɳg{=R׋鷜4zxߢ{cϵqDN,hP`]Ց}9GȖO=d'7ҺIP v:D7t( A8ힳ+Zܮ\= nJB K^ Ѯ6Bg[c#mILK|USIL%;\ ښȲFl,ٔMJ@Rͩı(Owht3e!-sTLwW>b%g¯' ]wg/rGx^ |˳f~1/| 4(?b=s(*pYݖ4s u9BKWaŰ(Ptd>2ZPA;]Ǎ<]B78A;2pANAz/-DaMݐs$d!۶M+NZþ&c<(ArpGcom8C$clXc9-gra^)EF=hig-TՐnWk 8@ݦA&{页&{$:Ez<єάV v=f9 alC`b]u)=x!hxpmM1~C5P qka]S3խb ⩟ft@7n׬C gf>6 AAznC __ _~l,^{2 b+ٍe*"..a~\W!Z I&5 8< axi*%Gq,(jr/j @QM{B IHS)4׫Oi,PGP \6 WcbWN.ިv,آ/S)֊I(D9+A {O֛՗ѳ:{ =rgқDBLN&D$^(;?^4*z؄)z+7P;|%)^ 'ήTT4Tz+>쀐(ݎR[`Ru<ٍBuh+@*짺wt꣣i  օ7@Ī#`X旿o X#6`"Q! Mz cXS?8\-Q!rW;/h:MwMC~ )4shZitH 0h%qggZ–4D*er3¬8'U[S"aHrPFňfL n<%`qΥ(s$:%HN4 ow2јbNs PWt-vrB,,$0at1fb5ג:P>VtüNJ )'U;OCbߞ3"36V!w]|+.B=X<<9d4T͠wj1YU50ɥ ?~]fD<߅s.4j"Vyg- cSF{6$ѫɲ;͆Jp[%;^`VQT*.Zr;ߴu Z`\+f>#kB;opG4L$5n4CFԲc1*`a#2DiYu`ؾH@l.h>XI0)}!RLkxk UX0L\wG+q!4IV(h ò~ښ k~淖6֘gv]4AY"To̓tf- c_@O*N#vLӉV5WhFeL 8 |NU%Z0fm\ o397 Tb-2Ja!Xg*Qdtء\)#%'J\jGF)bp `kk>cɬuHDtǟ~G@R,_pXNFgKFwgGٸ,o..d^:XN&cyL|^v^Z3d7LAРd15ʏ CǤtLB^Z"UV}x >Qw3SG@xM}NQcu&2䰨& 4ܰRLXZTNUa\R SjA Oy6yݧ nNH5XtqkaWwߔdk"P繵53|EHA6A41U iT\ R O|X>9HK-I)R)VnX#%4HDE티\pjN$bT1F>yQHJ բRL[)!Dj!z-8E# ƽy;Ө'd+j#vm{-c2fbĊեۑ*hBT\ nUf21ʄ9DS\Pt%c؃s J2MI{2@8Iz[-=t jo|~ceXwsNwς3T[˧.vc⅄c"Z7y n̮E# NO^:WH&20 4Qp̤El_X+`]ni{@h< |QIC2S }*4R 5 /: Nmox;޾̖; MDՉ0ٻ0xw;Qo.ću:ћ\HRLk' ߻LA;^?qnyիs~{h ?,Wc70f4΢?X(RǯN\]pT#&!V҃,pQsvBnɅ8z8PA,Wݱ]l+Xtq[q#^ FRLu\g:ϰ.ř camgs,29 x|/eŰ;ø(9  ec$'ȑޖ;!A1J] _/ meXl*wTqZ[P& I%Ʈc[n#7qւRA-x[xҤY.1RK\ [8*f 1ԤHܤR)B0qHV%FҜ8᫝: r`-VlJ 2r6MS `Xv/Aa$Q ́1WP_$?s vI[ӅwއOq s+|ϛ] y#aY᣿oaengg $_ |\d[3'?#ޭ31_=;wk}NyO` m"H/>X1gӟ_ǒZU-jHJx:?;{6gz< =aBrN}j:3&jvzW y?$ͳ,P(BO_orM',am/g濏>iTLBoNjӻ_2C/ 7^lc˳>}}]$ SEe Qš7Y'o˛ f-Bm<@+~2'N dRVC17iS:oꥎ~L C yb`Kt.&3DR1\y/; !(} B̓tP7we23˛0aVH1Y6/1MA( \vKѫ -,L_<]: صWgwup#>V Vw%DFvdb j٪EkE*U5}ebYӝE/)q;Z1xGyE p ѭwbqJr_ m&8(l;74:#27 $".t#|bWNNө( mFѹϲ+om:\N:WΕsUSmfB\}ЌpRib`˘x2SDfE6P޵6m+oϘ2Ov:'3i2^@Gnd> %YRt~H- w],bM l+&@*&@XED"D(B:BKy(2:&Hfc0, IDDۈ_T6bŗW Y[W6ߕMwe]#b0(P"`&qs%y50H*ʑU_TI?]?^`Ӵ16">Kq =;Q&pX9fp;&kww.[ n ~TB8 A 8j=k9oc> -c}sU 㰗} 0;>;y~aA}埓[x ˗/ϯ}yN~ {7l:1? 
=凟~|g?~~󗿝fN5lkMwln;ewbɷM = }y}O\VA7΅$'։r,ƯdzS3Os9ױ?$'onŬx8 -ٷ/^{koY̙]' f6-F~v[R05هΟ`A2w^I^00OSh _MY(w+32iorK,xX#k >_9}~v4̙̓ퟭ}?'/]ifkbye75]?5~wgh&4'>1cnaDj.O/C LeR՝f` V|y¨N`m+~rpr\~ܼwoAYMcewp3ɹOgba?e8x=N d};prx ?/bÛѼs0Τ}ϓ"91(A!P%^߅ICX(ht>OLAgi؛Kwo> &g+w ou.HL/{b_]+,֤y9<,q.cd!{=GAu<x4L M0po@5Eхp@MyvIHs|3 f<\!eܭ@XSw4O3(],5gY8ۺ{( 5f1ܖ[b-'HDi:}qjbDwl扰hce/JԸc"{L[H!W{;5B"V;sΫuC~3ͯA\7(}Egwx6q2%8` 1޷g]2fl-8z]Ip#84S3U;l.sIe,BmTqC5 e+ hI@$4fNuq8d)8.oT 0 UHS\Eϖ5)!* T m `4<" (B(IBSHc2+Fz?wOdq@3TFH[y*O1BqjPń%1VqRNq"8e $>PFsRд0"X3nR0gQbFR#X4&)CY2Ȁċ9s/iՔ`C50 8&"NH"cI!R(SDB% bL#ƺr*~UG5?_*T")F&LfD*2MkJud0!&T'*"|O=R'(V]U;E c4,Ez1 A%>ǎW%;]\ܓ70|vql iz4KL2yW U#W*KPZ=TQ.\½)AFRVQ.4f":Uaɪ G"@g5_+/dO /ݒfٯ$tLHmF ih~#e`Hk7Q/h>`AI3l[dO; Mоstf:ٹn*;A.`;9.}"sܚG攂õ'e(vk^:Wxn;m6|;YB#%ͅeJ"C7WNͽ@iPDԸ"d\sV޺qan&WV)Bn:6ph2&t_Y/֍h^/%Xd[y*MD OJ%"-X\rx˩yUysU+|W=`L!w a cKݚW\Q >m&KUߥ†+^h6X22]7um%N'Zi*Z z3Qgr%/砗>=&_\e!!$8ںel-Pm_x~?P!`˽&'KnwBgBtj%!W,*bG_lo5pÃ29/?̄h856"'1iok&;ryKa-LѷfJ}Nk\J2\'=yd x@,mԟPjjg8DV9,ѼuSf2f 7G$TBB0+CrpZaK 4U+8TzZH vQT+v4@T+yP:&ڃrJ 0GP1%^P[2wUD9B^ ~J֪蕹œ-,V ̄ҏ25##0 *\(4*Txj\&R8f8 Q(6f"wA)L)کLE·O?YMsG$teh3(.Ij90,1װ(;fTX*JFH1sRE )fxF##JFJEhgH%aMźX-Q#fA:a)")aI&_Vr%b1rd"D@5!W殒Fb'ۦEno!K{6Et2}~?>Dߋ5XAoU?F>V]^ּ=> r0DP0 ꅗ#]lӳ~8pbfpgLRB~~S*tҥ霧 F#dP0 i8Un^Ո`D+uv31W3wh4;q<D,{p IE&I@Fc,u;fhU:7b %DsXH("-x |ؐL6F>:I)uuX%"jR"F9\(-.bлWA!Z'H &pWOx$x0q\Fye Έ,R5! MQMDF,LQ@4M("A$H0tn+ªXҡH#\%j(~u)iArx;I`y ?iD<>=-{ ;'gô[$cia͓:07ik<7M\ȖwJ'a(Q-ӧ `TbQczO&`l[a-T0~* 7|6k? u]A_Ί ҭd\Ht(yuv IstRü6rU ""i/1jֻZi6&1ǝNֽXnSŕQon]EFRp!~Ⱥo8Vh{C0A7|B]cm.! Pn4'3gN]R)JsYgiϽ̥i=r#[f^;PWJ,qFT>HdMr&4NNi"5t7}3(\ QľܱlJ*ԶR&2Ѱ(ARGL  *R,n#`mi R%@ 78] S_Th|σp׶Ўq9F_:;AYƮme -KwR LN0fd@\罼tkO ٻ&n 8F.bHGVPW~9]JH_۲޳Fm2P?5dZ3"8d<~ 󄲭KQ^%A )"X`kͰ ?b? ]?WO)=lkOkCHգ{u [֍G=] V7N&:>|ʗ{!EGP쬓 2I8B(iJR\(ԧ|KԦK8-Z/`X=lnvQc` D+Ƚ|qn+$% JyJ(NƱs 9;dq$R@G)(~wց@w@ʣgm>zB0 n-$NȬ'̛CxJt RFF8ap"-2q``JMPF5D:$<+DqJ^a&cr0TY̢% E%t$]<2ZɈY-Hlb% 0aN 61S0JR$aV(S("T$FHA2-I}h.f Z0ŕM٭3IŌ„YnE ±P 'D}ygMfL AjIRX)tD`54%t =-Hui\$]fsϢ+?ڀ-U1I$牡`z]LvHAkb )%d@ */VS0ED\@x&&(B8[~gi{ ̤䙀dJ.dfbM5&IIp\V)`s>Qᨍ$ /2USM۶{b1Kujw1nB*m`z6escN ArjFa<z c^VCFuqqMBVlLo}.?sn`*gcX;[pjܙ XVve;O*2ڭםEֽ-VQۏY錡 4GfMwiz$Pyw;4Wkƹ6#0Ojz4QA+Rնu[^[x3^%ՠI0nh|{]ݔLT 󊩿ȡ_yN I%2a6w_%ے(N{9f`@SAZƘ)AKͅ_5s71)S BzRs46 sc9UQRoDyFHB(S99"1&p ej$7q8x^JLYoT_`@*4 }\\xL8 ޠ}Rdw}X]Q'&MyA==XD%_Ya1*4 L9Li#.]TL*g6ZlOY.CJ;%,Y.@4ݣHH"9'4bƑȤ?<iߔ9)G=vE#Rxf b 1,W9 rDL(rʃo``U-ݙPWևaLlzq`>Aݷ\y'S6Tx='($dvzT@+4#y_ *-r81zRu!G91n8N<t @a\M}?\gSSho|A\Oorba NmarʌWxP#M %[~ :;2 ]'oAc÷ן9s<~ȝg1Cσ7NpAA~92<.~n*tXh' hbАJa*q;5:t+Ajc^~} W͐.j֫n#Ȕl_ wd7pNY8(JO,Ü'AC: 2QE)]ʓȎz-eNrnn7"$޵5q#Cи_\݇݇MJ5`dڒ%1CRCrp.$RYΥuh_S^M*mM=ibZjz蓠&q<1{N`h4N:nvS읧f _\GRE$i{Lb(d>Ց|PL[4F+ěy|N_(frmL}GU߈x"!w'uo~yx6=]~ô~M֐fL 5TE(QnW4N5HaE0(*tXh4AS/P"! J5n 8Z% q2lkmc~\0@To,`L<囁ljODϒ^x2t2ghqtTN\`Ǟd֤wD}1\N.l"g9F{V 㥫pQŜ| M0Dp  %ϻ@Na=;zv.xFKcAb9? 
-Ӣ B)ib֣:tcb1;?PQepQyv~[EMl݋nAjxǹ*HR\KT1%kg7Ee.'j"Z}b*@?w}E"eGJepı;`Ǥ+/Akw/j%Dt5^иìbȶ޸I[3c % #2#AX˰_|m-D7Up3Hp_PUpk5gܠARo)*!}XbctAZ+]1Z!_ۄ{L]+[N F >&ˊ`0wic4N$n`c"(/ZSLɕ"wb2V,{bb7rGwZxM_܏M|}i|6lwt!lNz&z,ݔ`\eX7=/hn.t xp;g'SCdXڢ$)SsO-]j e>e[O49[28?[omمu]^ax[mkݚq!n n]:@Jcڤ4BL[$?RkF(*.,u8_GW2e>CtY"b o+&3 >sa=[g^A3[%`.՟@~I[IbRtƑLS b4qi@BÜ`̈́F$!21%ߗ4fNNUK{ `=.@:!8VS$ E׈x)Ch\ÌE"5b\5\*ѨByG= KT罫l]JZP!L s -tqh8F ˞\,bhOeGs}6zMzל" iJ]cŗ- U &s~M򞵅.:X@m3U[xh$ߡK-Wc *Y]irb+cܾR% iFSB@6)<DnҪ5viՓP>_m^n%>=9m?lܮz J:ƭk8Y8x^=j|| MHs4zYpԵNaW^iy߭ zuf#I?+;E ;dv.1iaCL![ C: Cc}}fQwOg/fwԽg&ŭ풻.`u k3y뒳=]44k̖` [\\SVf/ ?Q7̨suVKoym3{~_whKoM ¹xbBp\pڹޙJ w`8Q&O>?7YϬƉ+Eϲ^(Kpƞɤwџw)wCYb/M6R.4nZu1f4锾2^2IN%(27m'RQFpKʯJӄ&UddzH@&Җ 䖞|S] J9Y7}%A`HhXg~`%p;+E^V .͡l J=㥧5ᦣ4KLpyu|sgV]QJb!,IĈ*ؚHk±T.䃶:s'7G2geI#1[tҨqԸ;Hx}~'H-jF}[X*ؒ."?p5_l;tT2Q稕:˦ :..`%H;R%{FH. U8~.JJQKG\+p*q]BQvT29jӎe": l/KC}4 l z.^"-A7?)'ѺF%8S-|ɪQ&kk*Ϋ֬e/)@1m~CwC)m۵\oiB"Z7)XZ-ZB Բ \r J mG! onA3~oc [6 `Fv2og OdW^ [!\qӻ彯h-$eI%o_1I v,{ƛ>u&""B8ZS@S?"7x뛫A: x]=zV ʋw^ CCO$(jp=foRP y +{גϓCPXu5͇w][P [eIx<\q_E) ccM]bbbbaaѠk_2 Z;?(Ŏ3hﯮJWU Q=9{fS5̐XYxS" Qlp,Q"V6vX3׭qj@nGwnƇhMiTF_[ZsHڥdC1Cǵ.1wk!0xr5װ˿4\S}Î!N$xg mނ{צ4nh!.@{ؒ}|xǰ ˊO_P:Cz9vwdͥ?efH Afӳk̴ڻN!A;u*io&j*SBtkmHbw3r߻ˀlo.bI% 2THʹ~IJQ8\8$%gzn]]=f4ZA>^\ G+ցq`qG{y`5.MD\udStޡ,+|]{^V,tqR*XwzZe\\9ѽ^|6~cebeAƣhm>~>~ 4_*8g**:1}{0s{O7z5v.֌cϸĪ {@bC({u.XS '&ܚ }UL&5tZWbJe#8MAι%B &q9FGQMfP, q1NQ}M>]sRkrVp%eb8 ZcQnd .cpsusݝTۙuyá +!0 #gmêTTWkSk .ߠ.\ULeVbf_OUmDS ;PS8lRFp$lR޽L=L$A*/U*/F&V"M|a)rGWdosKe^t@E >39|񚤔<;XQ*ի.-j@7qֆg@ZoP`qj 1Q,FbX0ڇB)1ʰ$8TG0+|XВph6 pQi"\L|p+;-ﮮJn呈,|!adX߾y4[yԄ9C;(٠kJD)P<hCSdbuI1mm:cIF(>faժ1%! l4 ֞4!@ÏQsB)4A9K5IV T5Nq E+JӢp Bmdc!UTqB P9iۚ컲bA 5N-B{tZ@1HÈ{sKPK7QB5WUӒ+鰹de/Nf8Õ~ypRT"4(Hp:Qb>]ZVyZkَ*#5Zt0%7=|*4ݏԒK;lϣ~ srw׃'_뇾4G㕨&J cozN #?_~hI})-k_ce[7يVT˶jގij>4KUFjaI`pG(% XNJUyj辞 P8LU'lx P ]ڶjى09kqpElAF]Dԃtܭ- AJ%6}t;oi2yq0E?:R`v Jm6 g占\˪l(N[M#Fc/t>c&zjHI Ku7=1NTc7|_V\m^Ȳo:0FKQ8F,ͮ#FuWn*ݜxj9;T:}΢m_}HXww*y (|<Y%TtEjj q+ſ3'<FH cf_ CqDqzܘSS #KX' !I{ w--L3T_''X yc`UZ1ڂ=E!\0I)Rs)૒w/ޯ2z/ڳ!JYJ^ّXGq?,yF# ~3yy~CCߒ79}:xct3I?ww;;;-'qQI"dpg)a"S!2.yaAE 'ϧy- JOÓ >P~# ^Ӽe6`98 lh^HU65+UߴW8V,$ץmc*{V9xtTsu!T~lZ*P${.=ye_H^)Ҕ#` GRA<3%DH >/gK5P Fɨf|OgPqpʐ8MTNQ}Z*0U\Z)9-ѹlD1 TRi ZN?̂3pI j~ܺhJ+e(I!~_cp#%PjGC?@^|e8N?+4ЭhDb!@a߁A&p=RMM ^>;>xFjAPJՖ'Ccz?,KOmH&Ltt;0'^P :z56I ^cFiL}B*oTnTL)-⪷_q&C#ZEіBzoDDjCLX'^^x8 M)tJ6݋iZn;y7sI1Hn !'^BrWXg !H ɠJnNV8&g{ttD3)0xu'g'9o3fzvRk͉8AK@7tQXTBKu5Fm(5 @m;>.,!.#92ɩǏvI=8n֓ !6{M' *KVغv8ZNѶT*;*-ʛ0 t}.NSZv_;v:Ri~g뻛\3ki(:ovʸ[+R~hMp\sS\uo{,5{eڴ}?o0 EKh|Pӹl]FOPƓgU\^8|?TF?tv拝7Go^ѽY>{UhŗZUs29-hk/&tR9IFb`F@0 }T/LM(@D+c΀TpHCZSfm)\aGkɎ}T =T=!S8fQ1έ4jJPӘAr8ZT Lphp@⨁ȪJ5kEࣈTTZ3 %8@DZK+΄VZ]!a\.厢ݠ dRAEsQ_S.50]߂Q鏿cN5\G.9@H܆~]7ٚY T$Jj|t?~@? 57p<*P^|:1@,ta&BNxE +.X[?lZ`*-oK|T\%;a*$m{*^b ag3F0lx:+r«73EǴ4]- kEMgUϢ%:k#`?+\ k+- . +]PyO,P`y QvC.uǚ08+!5= Tn,[\ݹdFSe0Ʋ&5vNApyx.]io#+BۼA6vɗ]lmkW I#ؒVjٚ-]|X"@XbJ;CqJU00bDiI)DD R 2>;u3'S){? ԅʣ;xpJrYGg0y% ZXOϷ y䧉&DFᝥe_ Z؟TWP?L BX7hwA>ҽ|dq K%6Ɔc@[' iXW/ ; qT6KZ`M˶W _X; TJ( ^56fz>4^)C&J`|} vh8u2"c< ";=B$I:њDqIIRMw\n$;[թxyQn%kuG iOmt$3/0buoD(lyI1 rhM7᪦ݯ-ct&m%DK=d)Ds`h_!FN/C4iS2d2H{t-,A5g'LqhIb$7'/erwWU 1{4)3W_uX(VzNˉ) A#TR̬D6%͛"tt-;8꺧M7ս>]P!F]#[ˏGb$*UL=Ylj(,a)i=w~g)4;eIZ,V`2Ռ];YָQӸAttk^NΕ'>e 23UJp' 7T!b5zcZH{:VltҶaIǰ<`eYRָ1S0aaE gľ o+]2T1#bfB;02ZXœX83spdYA@ Z V} 3Oa!6lKc0Ӱ[D  0QYc0|Fx|( D0Y 4wG&j`RrRh~_Z$Nz,V/Tm%>GWVOo9xptXh vY5,Ч8GRh!m0[c [0,#*DJWI׸UwV)$9QښB\CwHR%B-5<|A@w/E)aJ_ʨNeE5f4J%$֯K&6>i%)V$Qrzo6U."T=+Ff|C=NQzPWƸE^KtS3BEq¿mDg'}oUi |]ſXHrά V*S:jXR6èRi!?vm*,5-?P. 
ZZg(+Qcw7&t%*(?jWe +9γb un"Dd+ ZYծ`b ~`IPkO"Q<TJ$8V  %PxdQ80‡SLD`TӮ1g^LR J)earЍ:MSc Y3\],hn4gd =E/ysh45Ӭ0z9)hNF&4Kg`9.=<|%e.Y^pR0D ޿|~Q`zw`_O}x_p,Ozwd&<|__;3|7 @&# Xv/"y$̽}1{CM_ptS.B qpRB-b" KyVLMQ}?F|  ^}/g&?I˲y۰lކel0QX*L#Tp,\]PSa6Ki^*''S5!wK⽪Yb+ia,orW[kbSkˉݻ7a<݆fBޒQOfrsqٌj 3Q%/fϾ06- >.jxcP}kZg98)9OSoM^ & (35x^>!ʌtʜ^gJ%8a1EZuZ2-yDTliGSfR`xl f1KPN-Fxr%M%Ujh-a15KNƒ='x9UV <> +l:]j7.IjYYIqͩ?hy RT>=V5D,7%4ξˊz^)4,z7aUje&K~ु o6Mqbe0=(яi|mhw䂇3lwJ?/,}76 w r) z% A`~mMgpy/1Ei5u2`ѷ?M_ϣc4pe)tkjoϥkp @ejrzzSֈsJ6wMzH7& 2oK-qaͷ'b:G,:=;޾yztZq+Ç?F>Lhú1/R z1=M߱<6:>?n:;Nw,ؓ^IvU̗վ shf6?.ϋճ& WU*E[/DTPwƚ [S RL;xontݚq&z6, 7F6U81m[jf]OX(!쿺)~1!V;P`]"}ҌHk|/|=Ab.O/> %NJɰVR\zUVJ+ա^((w2~':Eo x YcL:O6ۭb"qMX?w00 QbCWH#eB(!~x ]U1o@R8s4k͡$ l-_1j"Ԥ3pew.k \%IB ۉB3`P 9Ϙ6ce,Z!{0Xxsj-𯻜Y~PW0% KrBZolw ^O_m2 "-I&~`\]bSL9DUahO^Dte¹3{}M^kUWV^$1MP_0-`N_+9)W+`&|b(fvCS|ꂥ9l''{uW0X`D? 6#\!)G%I"ZrV\("1H)tp,T<% A"h+ };+  <s켓шDFn#RI$HР@kAPQgd>6|şoD*htfY&웕܄ c%>GXHIl/ s(|B{'Dn7:L) ޗV {Q=LjJ%o:!*iOǯM9H4X%_VUUk}}5㐫(x qXYǞ/1M1nbqcb|D-GH aX# tq.YEx3nZ! Vcѿb{c}][6Bz,wm*CY݆"/wkHђ3GqyZY'!ٿ/>p %w.oqtn"x;=兣XbQ\$}yWW_EkOJZ{4+]2ƤGsI r DDHY#A(S†\ v hA'D/v)S:ha^jiDm{Uuml7߭B%| |n'gi'&(a'osH߅lQ5 n{x=>߄tog+1cƮ56٭ K9y66q(|^><{ofU`+ӯu6%kht'B\[`Jb&ncL?:حnUZR s VRj"'nFIC`B)cyʃg GPf(y4(er381Xj.|9!pfɷA 0G {760KD㺇jh"T0-W@7X1v Q:)gvxvs 2a≃0$7:(0s Cf{\-B*$w@28*htu0M0V*< ( ` k3!̂XUX= ]\{,BXwzS).Խ\7 ֕Q=|Ū)(i\/hRl/{)7".%ݫ2<ARu C *ӯO=0.yǐ!;8|L? y jYo<+IfI,hgFЄ^WT)O!fsz^_ݤ_^|ȈWmM0 z")20=m֥ںwmc`90-.?R׺e-{ %|IZMNô-tv7kj @0ۅN bmt||w fL ;TNZP-91> tP1`&RP@-&{xfOYtb]$hr&_tm=ML WlE_:;˝77_+r-;s?͸ W@+r?Ot@.6<!Ho8C:e!Q{v'WmU5uA)=b/+ym1 AFRPi\I33q0 &BL픐;GIrV$J-Ƙ($Y!V<Ǡ0kbJ+籼$H#$ zIG?wFJh YveIvVrH``u+%'p+ h(v@h:4 2tk$BbKd&sqJ";{1+\d.$R3E5%Ija|) K$w$D<H9U=Sƴ&}hHIB[NI[;P򄲄xebOɃ N}8G(L2%CT0/0$U1-2E>. 8n?TJ wh,jL)Y+$s 52I*Cf< 7reҺX Q!jOHiN[n^z87Esr&Had 6(8zC)[= g@C} 3 u^~V8%>ى _ 4QsPքrMqgJ$u-!'k2bT];QygT__RYrY}ݍ⡴Bn! 52WH0P$KuaI弊[oUΫ!J!hOnoKMXPc`R`m>`J)u`S%~LRǔ #j;e,vמ!A"+Ha < w yFe iAβBe *)y٫fHK{iqyUR/`҆K\w]e1!:㧭ָx%.޴%2C j%ApV\>qy/!WR[X8X|{HkqN*:Zڎ!F'0axAnW G(F%L@)'t9S1LrpbȽXZZsYNwNıD1LTy4͝"<}S[r ~gm)s j6 9Ϣ+ky?8Ma)SY 46 y[뺒rW L1JYfEB[ˑ윸)>;zQ<9%5}0^iGO,,.ijz7?:t~Q\F7s7gb'LoLj9=.8פsY4j còza|gmV{O~rAhadF']I^YEiweq6. AgPIB41w7 ʚPIKY&eDsD%RYn3Z \ 6])o;N.`a-TR8[^UQ{TEhRBteTdžQ"qJ"a$BUb)3gKpS"h[q&JUi~Y\&X5ԩ}@ V: nacޒ@rҷyLMA E`,_؆![{dRl"0+ZꦃQ۹fi{Mrq,@v^c:F7M'<9OrI ?K[p>jQvTmZp^mf3^䗻Sx_wnz0$R=]Y۷AB=`K4;aD;;q S?n8ÎΣԌ{֍d+O NE ӂ3rb AWL.!"O4޾y~,?JS$sO- Z70<]* z=pla)Xݸ< SHQIR,3ǹ3Tn[>hʍJfs9_Iw׼CF(8ݓg" ܹ6MjrψGj.?j6]QSܜu(rOlw3v?T/g;jW/gwŻ^Nz\ FEr(q`e9(k R/6EZRwkq_._v8YĦD8)WUfu"siwqĂ勅6u@nP B*"8bryMH#V$shSZG2aXZ>wpzę@]8DaûC͉^j  .zep-)?$eN] of?s,4l4R i,L41i s]OZ.9ݟ}י$] "*ahAohoJ`,W^$)h!҅ڼwBg&y)Nnb~6qV?~ `j}vI3fCw1e F&/h /Au~;>_Sl_ڿ>8sBZ}nu'`IGQ7:dGЂTHvb:.?:eIh#w,3fg;h?ypɫ翟E7//xԍ0G@lQx19B3}{~uz31ٷ&ikki={߼{M^Kϟ#y'7²lAׯşYvjL3~e^jΎjt?'qo0N.''6m^I}W|6 Aió/C`X>^996s9S7Y~ŋ+r?p<,gc«t|j`f\{M/Mx2{M33 bO¥@0uғY#+|%v(bdQۥ @qu`:l?RSQ][+sw9ۤrUh[ 2‹*Z`Pxe<,D5~5+U1+Ǯ"{-ĕ2s2XL8C "KC% [;^Vf^r8Hq>w?‹6kx01,n ai/-9psfq"%%&u,244nw(9"ՠp 'x4"d˓82q&Ab$ $(b(&T1i4-c=a*64\3xC5c=͝Jb$R{(Ί4%✋]'Ň䘣ꏛyPO׻l|83HڎNRD*%:8tM%)M?=E qf(Bhhn1ŒV-gE$HZ>ȋ>&C"QCyy`\'m@Hс2خFR 9N~ya@hfW-3aWk0qF,6 "NYq: ~B.sy_#AĥxQk#1o{r("yD@C;kPkRqKZYf\Ժ9؜Ւh' IkvceY&BR'߸d݊f3bJJ'BѺ(^%]bXÐ(ڀE &"$Zf#kXaPl2NJ1#o05"$A\򡠗;ĉb{|DG%4 J D0?QhB+p߾)6tH)*ٍѭc1Q=Yw6yd5$Axr*BRsLR+a&r9D+O c5*kusJ燐眭P*άqWzw*ߒGV xg=ē_О&cW^,}ۿ|ym 3lU{K4w_jcP. 
xm1sݷj9(bhLKtIʱ_omV@lM>t5h8:nM {hYk][oɱ+^b 988 lכB_mf%J!){`&%jH΅C ^Þf]SƄ{'Z].JzYZ0?Q]:d\ZCx 羻t2V hO%`Bkxy;3E3po  ]ٛxM C)SUraag$;ޱ ,gQ,g=X9EHAfÿW?-_yo- 55{,\>;b,(Izׅ^8c]:DpS&ٷ|tt-IՍ3NjUN _N5=igL=>ӱ6yGc‘nJ0baRY j,'aYq-ĘYm@2 % ƪ ;JJQDvqYL'P,Mc GC$զ G4]W4e4Izu:ٳոrl,9ђ*+lRr&b;29m2yoGejA. /YQ9!# Ӎ/), .ۭiJ8..w:|G-~GYᨬsQK1 G`"Ft׌6;xw"+dx%R>˻ m@=Քݬhץie}< bClpAzƥgB Z U #R T1Fqjj}&44ݸwM:ʪj׵R)D7rxq7}po4xe9gPr^ro2"LsyYy,v6J=r6Z,wf>JVϚIvcӉ*yNAq휶M8sP(0QxЬS¼] 2OkcΎ/j1 lFO>VqFxC)?~^d?ݘٿ [Ιhf;ys2ٔcJW|è1 7Ńi ?p{ō f0.fIpCB^'J`ļz}x6܄| K6骙;9]E0ڡ J誙l9] 7tW tռewc,L܅F``VG`, 1P0fY3)(sb vWZrQ?%^菗,&#.JQkN1i)IV'Ulh >f&Xm4\MYxi$Sf h#"FHHH:[;h=j84DuZ=$Y^#=21[ :'Vnko`(-(꼌kH& $gqFXN0)dܛ.T1Ww.]B;S8{k$`;"=.K텟 }$)%=e=}l(_s=D&j/h1~H/M_ή.~,lC?)qUWip\C n:-d{LĔ+f 8Gof I#J /3~ ћvq_;hE/Fو!n? axh)^e<4/?X%s6|>F{*jdBI `{:He{gȍmE{#;%/MB^{TIc]耢k[pSS z8kL[:r9IrM-D&rIu$4!Ĵ`d'd"/n^)E8Kmv++# D ڥ7,$bR+S7j hq⡂cX`&pjKN5_+"N'-ۦzA` gr%d hl+$`\nI%#JܡLKnpXms(/ k(T Dh+x2ٖLE\׏Z.ӖKew^܊L))Rk@n<\3 +\ػ{{=vD7W)YC+3t`U8 N{pn76+p#4zQ%{wOY뽆.zmݳSG(L"&'TX{U<,؝bm(#XBf꼕AH22,O=&!JkXh8!]H{2) qJTrq0Zw% hWgk/X͏YۙX2QtS6g2zJ!:~52긮l@ҶUl7] #DeJ1ƒ%0υޔp=:GrRq@08j! n,pUs(g#ƒDO#f* 3ӗ\j+e1W~7ua|}{|Z->kbbMa߄ûv%"n^ь>N #x͸5QWwSc:9\0Mca"3SnRRSc ֨Hi\+#tqqG "qZ2")1)/RhFp)W2F .:nѯ5Y&̓8bzkN" 2&)}@ Q0Q"D[T k* ֳ`cNTc#mQ/Vrk„ȕ6&0eJmj0t4 ߾.M* PQ} FL3ee(H ZCVhoeXϩh&tfVlVN|+*}a.Т0B q"j`z`EIڤ2JS M-ktPQr8)2ZlL*\mMRS5D2+8FE!`p(A-s^31Ԋ:HV 8y15܁1j\Eϣ@EEj(QdvжX;!.t(L)M)+ ~UO9P;1x \M%~kCk%-~ikڒ>mi ZmјSŹTq&(E?sQ)\ət6Ĩެ BW`Aj2iQptdU^tI'>Ui3hq6MZj:鰕N.)zT4;5$ӽ<n>k"/C;=]w¸yÏH nE2e[X95ƱP1AfW{D{+m弦zùQL@ET&Id| \i#WjX2¾\Eӎw|;Q. 'ETt8\pʗQu~V.jM6 Ζ bs6[(PIyzʦ=ڟJ %+"\2)'oCQH=ʵ䦮~UG9ŎwuQMJ!h^Ɍj77J1t:P5֯PQL7xL_o-Q7}KsUKyTMe{)&iswWF\`Mo֪"/nh&%^݄e~c X_> =?Nn/H/o_P*yf!Ǡ:84M*wmI_ϽҾA^Q\.c]dI$3KRmI6)bZ.g3;;]0_\*xn4=N:k sn[.Ant#.I"$zb}x1:2 Ot^yOt^=2 1u64+4X2uSCD9#WH.[A}Γ#TvQ\8`! !CC@7%@_2kK%~ئ›d#zr\h ZƓcS!YQ-鹛iXdOϒ Y,2Dc~t!߆gtlc=Ej#.Yq:t70n>#¼# HlnCRsd׃xܘ7 u,9&A~j `Q7&vEL)=}j#xoTdi11b3\ ``4Ӂ1(&RlyBlf{A˔y}-Ӻ 4Vc=6r)+>R}̍f e$*s㶸E_!3<8LUyM 9gT儴}y5. @ 4QXsO?v7V 2iFZW#HJGu'VjXb܂WJF7tuKXްC`艙]>io8{{~)gl5"L&nWakjU<\9@TmV~(r%RU%MʝJ,c {0:YsV"S]Aҗ~XAaGY$0i,גRR5 cQF: Av1cSR`?QNa.B"2\ܫ܌Q ?].[wbz!ς(3kw.n&U8*YrtmV.~;~I@SkxѽV7^Em ,]N^\-Gﻡ+Wk$Ys.4%ĿuwVnhwna,AT_ PkJ aFn0}Ux@W߳LK }8h[Zb@+,j4!EHǂ8N-ah'DNJ"CDW#D⅚n2N]j340^4U%tk4U5 @k {qT% {LHkXCHRe?]|ޢwEB &o,)ѶJM&W!^F3hf'u/ k2_a(HXQ +jb kbo`ܩQ a ,FuBHu1H:pq LͰ[zfnn,4X% .Iʶ{QgjjhKM)S-g{UTZ!CQ~!S(#ѐJ8pd3ħo Q @6"RDb-:jH3() J J B`^f+]Oc$TCW@]Pvq+ߺëۂw g]eg=9nݳl#-wOgz,Zk׍z*7/ n|V2XbF&XT- ɛ$fQv 4] gM+oI*rgmO|* (㻽]sg9D혟/`ER:MR8 EO2e+!!9<(W2u 5"Y_Q[Q8:?تSd a糙< 'N*i`Ol7r5]2s1_+/W^z^7D1E:8b d$UL#rGbBQ6Њί~y0:̧xKJf3;kw ݧV`<8:1Wc(rTb;PFE;ą^C!@kZj񧜢@T;5=EIE!"tJ˘FLXH3g F`A05p\QjTƌc @d0ܝVd[ʆs%pYՁ*I1'61D+:YL`PP0t,BH(2Z&8 k2Q0cWO|fMy|&̣4o݉f\Cyo^^.<]'\P"o 0n섀Z VD#X,6i-Mv1I$<ǺXq(ϵxWI`}ؾFЗئ @]%{ʠԊ,MNẎBHcuUTrkrDB~OU9HRq^K&0x:N?{v|Ịɧ&fv}vd J˟}Rc\1ř>42=l,9x Oi!`zς3fEfdΫ4 ue9f&0f:X'rm8aԢ!űqVSc"FZ9-׺$@ 4/ůSM;f#\@߮Kaś2}ju!% &]$EA6vGJb-؈YBN&Hrbl`anvJ9^^-%DBtěRW)Sc{u`zz`ӻ1z f(2%+Ugd`Qx)ԝ qw^l&LYL |vJl{=L`q_wpvI@3sEFΖW~auGg5@VP`boN9kn1|g%7l>Fqj I3:yHRFRuHqr\M@PriAql~`XZBf-kFT#IV05bk`jd/~Rŝj*:| ޣakL*\rSr^ t$/t3Be3գ*zkL AEsfSmUGJV@uջV~:p2.ő c #Nx#C {j rTcՈ+ΚTRy)X a-Kx]P4 6 bFF4e0FB S)E3A#o߾-\3[-Mzӥ!{Q ?"h$h+8FI`a 5*(:C@]s e։@^r1d9`z_?t5@1 ݴD%xM /hnKƀ~B84Qbj`X VG׊ v)UɁ]k0 Sʟnͮ0H?Mg'Ӊ͢\snFyj#Xmul$ܦh1h85KƉ-Ɵ7<ѷ[k"Q~{H( W!Ymr[_Ӱj7 lNS pz>p0&~wb>NoL{ñ5d/ߝ:~}?O_3r(?O/F*3Iy~/}qf=>? OۗwW|fӹ+q?㕿ŏ7___f?OAWI8L|}9V$~K֯#^ꟓEpܾAt4c.^-] w׊C$)qfGSo͟4__{)y x4%pݽ _Ą!֠w+~"%{uh^M`wQhrIG/_aTd~^dzR,@~p\ _ ˧\$ Oq:OүMY~6+I. 
?ތkGrM?_(TeKņZ~tMtnxV  \zӥ1o$M( yhxM/2&ƒJ0֣Y$i(r;M\09[/Rh"Ї ~vD"=Na0p8I'􏓇F޵q#e/$q9 8@K~8~$0$p[QKaݭVƶa{fZbY)ȻxAE}wKٝy#ݻw}RaWL|.Jr-  7\hl!w|iJ#dR@)r$<<'{ VcSۍӖ{PssJj)Bqc5ya4xPEZt-gߜ5ؔ!/t(elɐWH{;#RCJd(TӵviJ$,F.qk6Ҏk!66 j%Rㅝa9@yPKc(언ԡ̍"pr6˲dMҞ{<0V닭I|tzS|˗^{B$ hϫ0/:3. CiWRȴ.QiUjEY.EI8lNhCJ68X^1 {^-9xu'XR ~lD6y,>dkC3]ϲZ9#̆snu寊u)վj=>@Js!r*S0M+Ua;$9ӦEi !V*H E=PffjhfBxjθ00v3Sa oֆpGf( 9F-m̏Y"  #C.o;A@޻ *c޽?jd-kkǖ-JS8s.DύhCZӱH*'"0!zҺH 9__bBb1do9Kui|^<t-f%@+wC‘v(P-CI׼#470t:1YQϭ&E =DtPзGXJ#&d0kr.2+ 5Iq2q4dr@ Ae Ұ Cء7Rv {&D* d?Z@5<Mܸ0\wի]~~ZР;fZ|9@8Დ*vJOfN;ah5zqǫcǫ#n:y.O~s'wV`~uBsOf2#*rL2d,6]zfV'3|4̚I=ݡu?7>'`| ƱuVծ#ipIN<7CaO _0[ Y/xi2rټ( 1QZM*x>0 }SxypBbdp-=֪5bӭ#rV 4fc3V-Ƚԛ۸̻d c!$P}#kÄ^;E:r5yJ<{#eek# _oisjniԾw+qV _Ik~ޡt3LAbtREN <v4~Dtc%5}ר>tK j$t;A{DGfnesqь4 &5@+?"qȘv^9/Y&1Sg\2!}Hei5LY^Zc5">] eܑH/e:g$"8=idsP*o}o8ӻȺ%pʽ C[/hW\φ S܋v)g8(6W!| [wӿZ>]=S]_<3w\W{vwǖ/ݫJ~&#Y+3u_/:hR⼲wEESk4J C U<%d{;(i @&so'#ksu"]C9Xf'jg"fC?8|AsGpvCѧ4E@$W1>5 wLr 6\ֻN]!}:}Dl+ 7q&9 5=U0t(Z:@=Vrqљxw E=%*9μf!BH|,|rjhЮeJٳZ '`dIW]s[NGLz(`:6$8CۀѠs2quC3uc/Ky eq/B{LrY()XQ4ԚA7fB9"5B%>%9@цv҅Den$F6xSbyixcTՓٮ=  }7 `H3D90z*0f4C٧N\ :ԩצ_8D6n`.ݠw>P!b%( mi2.Q .ЩBVҔ=($*k!>3D->a|Ƴc?9iLpioPgIlm)Crō>([1emt]B/Kup ,&Z#X'68bz-v}uO]q 8Si{UNNx'fz 'Ni`H/oTjNZ(^TUML{\=gϴOq!C>RM[Q?M/%]a}4ټGg`3}es ΔԞ۷wFޑ<yMF*fҤ Y|i d1Y;41j~)>A^O9 = 1OS. "-brdEBȚ `L' YXHe␳GQAQe_ h2&efmOgόoT^>-ecKvxcq).E=<1GEt$oE;Of2NHDͬ|Ly9LyZ<o~EUrΘSVnL :(uبNю*w7pHr5&Hp4`=՜'@ AQt#!o0L4Hξּ'`J)Id6Ev݃:f9:v\TKJUG@; ; OɌDA^ PTB8ɈbRxZ= jg2)'fR&ZYQ.O;T) Zn?)fv_b?_|Kz*>*~zw7wyHAm{:S)%P*Ū^asEVN/Ɋh 5 i~ۯG j4<!Uc1?Q5}:a3̑NXb;K'W %P6g`eLZYpU@r0ݲ>(F.zDV|T};T1LQ7ۜ,{f\ލ%E\ՒqtчLByw#ty`F9GFş̋Z~?WmO]3BU]&XۍZs]|I6Jhhtb+~u¿])"?%ifCs^<, {y$x3i-Y+0 DofwMǼQ+1Hgx±'H `ҁ@-8˝ͤmH`Pɣ+&t%_c`϶:/^'PG6/K4cv0w]v$݃P+ pn8_t7B[ cj3_B#X@q0l8ӽҕ5]0/JĂ3&{܊6i}ˉKnF|q? ŧx {zVrjµY82ֺbĜW94-Wœ:x5٘Ösu Kakcۺ4Foh]v7l:cТs,|f \`=m@ Ɂ3LeVʒeT{i_gnFW]w$/K g=߈7>Ue57quCGVvaBmǿ0u?yE}\mX5(_Y炱՟g껫۫mP-=lX(HqWᤌ́$C86 Jg|ecfakmayJUr0ECm6xgWP0]Hj`Y{`a$ZH *hY*'.t1A!ZvڑN@;Υ0]T7Gznn8 q^֌ܓb FP<7,/9 )7e/$q. my.38fwQ6!㖣Kd O]Q8^Ϩ@Uྌ6sZBHcs8%…Rh5|N_h"`ڰMjR" \mthZ6)2(W\ij3Pݜqڍd|*V{vCo&v/+$:v@;m(;pIԊ#@aS.NAmkFaOr,wN3YIҰf̠+}iZkq7<qgORgPkɚ ޙqۛT߲('3/Ɣ`QQxK|59-8wV!Lgښ8r_aljwvl'ڜWdK#++QZr֕?hR!EJ3r(q/ݘhF5 _R<{}?1^ygvhA\5]R&=l5n{IZ.V+.Q"r/:Ivt$쾘S]N~%YWP3=:]ե|rIz*4RŤT}jpaטnt4R݁ЈC}*,%rgUPX hQ3zQDc0Cd:s2dRۋd{ZlX3ǎ0!lKHU8iI(uA^SK#ODID-dj46\畒JMVnA5{q݌R_w0ޚ]{&_\ںR(it̊"#fy0+ - 7VfkX`Ƚ]փ1z~ta*J * Y&w~Q|-W ۅ0VMYB :-d^JǙwtnHZpBBP[xfy) X [eiHiIy,̠,/Kk 2@wC:c?~+apXkչ '?end'h/lUpO&okk5svq| m,jJK1-,}‹V/ IE߱Lb<9doF9lvĵNySn}w07 yj,@)TZ/)Vu|Pe=+3] lݍz˂GTD u^Ɇ^4IiC1I^ṯUq;/ hzeR_ec,qz3;+U?ku6!mg\?_?RLua,Cj+s4Hf,2R,N(]V1 FwZ|6*mUᏽS$dޒ=mu~.щTΛKt}ٵҘj4|tLJ_/[1 4iprvݯY'X2v/ǩqjrڽW۽܌N/P*)JS{RGU )\KIB9gzO3ߊ?ߎ>\]8FS^}u'ak^B/Fq6 LgsceOGEɗ֔c{(-bxv'듦QKK %/u߰ȳ ]NV VZNeSH&4+ch^Ծ ܁˅")2B3eS$[{hFR^Rٞ:(a=Xtf{?,u8f4kԐoQ\Y*wxMig]Rd{ҽ8\%YV>n-_$3 M/CbSe39é:$MonPd6&ӡioS tx{ ΀aB2 (K䶓ok1h{ݞStj|NC; Kk'J QFÛ! Oy~WyJ/ڇvm'juhvK1Y$G;$k꽂.ÿ|w߼'zG|m;H+?ϟ;~}:z{ &jI#Di3 Kx9)%P &'W:Csl:l~U.SumK+r\8{ǪҸRP.).MaM(rH +t +\Eu+ )]ɦD"Etryz7aIJYj+m.|桝a1yk3J^[GjFUҚ?o?|`2U9{?#fBfO 3w^M7x}v$2 B[ΙL=N> X?]D*ژXyt~Wvo׌&$c;kyA"xCZ_Ћ?%^#VQVYQ"D`Tan>"Zm`#e|&F.L)mjbCa8 4aڟ6,iۮ)siB$w٭dyc2y/%old?Fm`ɫs#J/KKQ<) X:Ru>/KTl֝F~1ϝ&=<O,bYn`Q= }O.x$@`ݡ5t j>G*gQ6?Y|h,&vV auȺCsgu#oqW * .yR!4j o_e"`3AĜW4TPD~*^YdYixub-xn 3fj5t^y(^ѓ2X))/}I9m*E'-W);O?G"~~:Sc~t:?x: C-'KGK.N$4xMkPqLGZ؉;kpn;A8C :FNgÑH"й rR$Y#Kg^=1mAK൥R<.B2jT^or(ev"F30`m +<2*y+/-F孼~Um¶Vq|0kTak;S`gV_{u@1l{Ϛh%Q:=Sk3:$Rټ=3C䭁/-L?gRdi%N "tlc]9&s4:SS!UMU f% ۋN IV**fVo ےMI+Xgɠx\ÐCw՘0ҵ6%mK )jz(]̴R ! 
[binary data: end of the gzip-compressed archive member logs/kubelet.log.gz — no text is recoverable from the compressed stream]
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005620376615134203057017704 0ustar rootroot
Jan 21 15:26:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 15:26:07 crc restorecon[4738]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 
15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc 
restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
[... several hundred further restorecon[4738] entries of the same form elided, all stamped Jan 21 15:26:07 on host crc: each reports one path under /var/lib/kubelet/pods/<pod-uid>/ (configmap and empty-dir volume contents, etc-hosts, and per-container scratch files) "not reset as customized by admin" to system_u:object_r:container_file_t:s0 with per-pod MCS category pairs (c7,c13; c12,c18; c2,c18; c9,c12; c0,c16; c5,c16; c2,c13; c5,c11; c10,c16; c9,c14; c4,c17; c9,c22; c19,c22; c19,c24; c129,c158; c97,c980; c268,c620; and, for etcd and kube-apiserver static pods, c294,c884 / c336,c1016 / c666,c920 and c97,c980). Affected pods cover the marketplace-operator, olm-operator, kube-storage-version-migrator-operator, openshift-config-operator (openshift-api), authentication-operator, service-ca-operator, openshift-apiserver-operator, ingress-operator and its kube-rbac-proxy, cluster-image-registry-operator, catalog-operator, openshift-controller-manager-operator, machine-config-server, migrator/graceful-termination, service-ca-controller, etcd (setup, etcd-ensure-env-vars, etcd-resources-copy, etcdctl, etcd, etcd-metrics, etcd-readyz, etcd-rev), multus-admission-controller, serve-healthcheck-canary, machine-approver-controller, kube-apiserver (setup, cert-syncer, cert-regeneration-controller, insecure-readyz, check-endpoints), and the catalog pod 57a731c4-ef35-47a8-b875-bfb08a7f8011, whose kubernetes.io~empty-dir/catalog-content tree lists one directory plus catalog.json (index.json for bpfman-operator) per operator: 3scale-operator, advanced-cluster-management, amq-broker-rhel8, amq-online, amq-streams, amq-streams-console, amq7-interconnect-operator, ansible-automation-platform-operator, ansible-cloud-addons-operator, apicast-operator, apicurio-registry-3, authorino-operator, aws-load-balancer-operator, bamoe-businessautomation-operator, bamoe-kogito-operator, bpfman-operator, businessautomation-operator, cephcsi-operator, cincinnati-operator, cluster-kube-descheduler-operator, cluster-logging, cluster-observability-operator, compliance-operator, container-security-operator, costmanagement-metrics-operator, cryostat-operator, datagrid, devspaces, devworkspace-operator, dpu-network-operator, eap, elasticsearch-operator, external-dns-operator, fence-agents-remediation, file-integrity-operator, fuse-apicurito, fuse-console, fuse-online, gatekeeper-operator-product, jaeger-product, jws-operator, kernel-module-management, kernel-module-management-hub, kiali-ossm, kubevirt-hyperconverged, logic-operator-rhel8, loki-operator, lvms-operator, machine-deletion-remediation, mcg-operator, mta-operator ...]
Jan 21 15:26:07 crc restorecon[4738]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc 
restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:08 crc restorecon[4738]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.601144 4739 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603884 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603901 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603907 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603911 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603915 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603919 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603923 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603927 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603931 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603935 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603941 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603946 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603949 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603953 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603957 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603961 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603964 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603968 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603972 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603976 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603981 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603987 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603993 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603997 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604002 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604007 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604011 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604016 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604020 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604025 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604029 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604033 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604038 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604043 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604048 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604052 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604056 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604059 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604064 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604069 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604080 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604084 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604088 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604091 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604095 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604099 4739 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604102 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604106 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604109 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604112 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604116 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604119 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604123 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604127 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604131 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604135 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604138 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604143 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604146 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604150 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604154 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604157 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604161 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604164 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604168 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604173 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604176 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604179 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604183 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604186 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604190 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604401 4739 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604411 4739 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604418 4739 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604423 4739 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604428 4739 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604432 4739 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604438 4739 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604443 4739 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604448 4739 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604452 4739 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604456 4739 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604460 4739 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604465 4739 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604469 4739 flags.go:64] FLAG: --cgroup-root=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604473 4739 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604477 4739 flags.go:64] FLAG: --client-ca-file=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604480 4739 flags.go:64] FLAG: --cloud-config=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604484 4739 flags.go:64] FLAG: --cloud-provider=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604488 4739 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604494 4739 flags.go:64] FLAG: --cluster-domain=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604498 4739 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604502 4739 flags.go:64] FLAG: --config-dir=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604506 4739 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604511 4739 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604516 4739 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604520 4739 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604524 4739 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604528 4739 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604533 4739 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604536 4739 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604540 4739 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604545 4739 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604550 4739 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604555 4739 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604559 4739 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604564 4739 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604568 4739 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604572 4739 flags.go:64] FLAG: --enable-server="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604576 4739 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604581 4739 flags.go:64] FLAG: --event-burst="100"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604585 4739 flags.go:64] FLAG: --event-qps="50"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604589 4739 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604593 4739 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604598 4739 flags.go:64] FLAG: --eviction-hard=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604604 4739 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604609 4739 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604614 4739 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604619 4739 flags.go:64] FLAG: --eviction-soft=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604623 4739 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604627 4739 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604631 4739 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604635 4739 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604639 4739 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604643 4739 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604647 4739 flags.go:64] FLAG: --feature-gates=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604652 4739 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604656 4739 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604660 4739 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604665 4739 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604669 4739 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604673 4739 flags.go:64] FLAG: --help="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604677 4739 flags.go:64] FLAG: --hostname-override=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604681 4739 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604686 4739 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604690 4739 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604694 4739 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604698 4739 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604702 4739 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604705 4739 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604709 4739 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604713 4739 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604718 4739 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604722 4739 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604726 4739 flags.go:64] FLAG: --kube-reserved=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604730 4739 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604735 4739 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604739 4739 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604743 4739 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604747 4739 flags.go:64] FLAG: --lock-file=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604751 4739 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604755 4739 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604759 4739 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604768 4739 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604772 4739 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604776 4739 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604781 4739 flags.go:64] FLAG: --logging-format="text"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604784 4739 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604789 4739 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604792 4739 flags.go:64] FLAG: --manifest-url=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604796 4739 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604802 4739 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604806 4739 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604815 4739 flags.go:64] FLAG: --max-pods="110"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604832 4739 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604836 4739 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604840 4739 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604844 4739 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604848 4739 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604852 4739 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604856 4739 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604866 4739 flags.go:64] FLAG: --node-status-max-images="50"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604870 4739 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604874 4739 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604878 4739 flags.go:64] FLAG: --pod-cidr=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604882 4739 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604888 4739 flags.go:64] FLAG: --pod-manifest-path=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604892 4739 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604897 4739 flags.go:64] FLAG: --pods-per-core="0"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604902 4739 flags.go:64] FLAG: --port="10250"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604907 4739 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604911 4739 flags.go:64] FLAG: --provider-id=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604915 4739 flags.go:64] FLAG: --qos-reserved=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604920 4739 flags.go:64] FLAG: --read-only-port="10255"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604924 4739 flags.go:64] FLAG: --register-node="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604929 4739 flags.go:64] FLAG: --register-schedulable="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604932 4739 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604939 4739 flags.go:64] FLAG: --registry-burst="10"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604943 4739 flags.go:64] FLAG: --registry-qps="5"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604948 4739 flags.go:64] FLAG: --reserved-cpus=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604952 4739 flags.go:64] FLAG: --reserved-memory=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604957 4739 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604961 4739 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604966 4739 flags.go:64] FLAG: --rotate-certificates="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604970 4739 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604973 4739 flags.go:64] FLAG: --runonce="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604977 4739 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604981 4739 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604986 4739 flags.go:64] FLAG: --seccomp-default="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604990 4739 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604994 4739 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604998 4739 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605002 4739 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605006 4739 flags.go:64] FLAG: --storage-driver-password="root"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605010 4739 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605015 4739 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605018 4739 flags.go:64] FLAG: --storage-driver-user="root"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605022 4739 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605027 4739 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605031 4739 flags.go:64] FLAG: --system-cgroups=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605035 4739 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605041 4739 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605045 4739 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605048 4739 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605054 4739 flags.go:64] FLAG: --tls-min-version=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605058 4739 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605062 4739 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605098 4739 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605103 4739 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605107 4739 flags.go:64] FLAG: --v="2"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605113 4739 flags.go:64] FLAG: --version="false"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605118 4739 flags.go:64] FLAG: --vmodule=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605123 4739 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605127 4739 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606677 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606692 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606700 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606704 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606708 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606712 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606718 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606723 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606728 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606732 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606736 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606740 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606744 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606748 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606755 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606758 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606762 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606766 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606770 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606774 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606778 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606781 4739 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606785 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606789 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606793 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606797 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606801 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606807 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606811 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606817 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606838 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606843 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606847 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606852 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606856 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606860 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606864 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606868 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606871 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606878 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606883 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606887 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606891 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606895 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606899 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606903 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606907 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606910 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606914 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606918 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606921 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606925 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606931 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606935 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606938 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606942 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606946 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606949 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606952 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606956 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606960 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606964 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606969 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606973 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606979 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606984 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606988 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606991 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606995 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606999 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.607002 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.607008 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.617458 4739 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.617502 4739 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617601 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617611 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617618 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617624 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617630 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617635 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617641 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617646 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617651 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617657 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617662 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617670 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617679 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617685 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617692 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617697 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617702 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617708 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617715 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617721 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617728 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617734 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617740 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617747 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617753 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617759 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617764 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617771 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617778 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617784 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617789 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617795 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617800 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617805 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617811 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617841 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617847 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617852 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617857 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617863 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617869 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617875 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617881 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617886 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617892 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617897 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617902 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617907 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617912 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617918 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617923 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617928 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617933 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617939 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617947 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617954 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617959 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617964 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617970 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617975 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617980 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617985 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617990 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617995 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618000 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618006 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618012 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618019 4739 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618026 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618032 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618039 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.618051 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618215 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618225 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618231 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618237 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618431 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618437 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618442 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618448 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618453 4739 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618459 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618464 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618470 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618475 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618480 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618486 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618492 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618497 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618503 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618509 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618515 4739 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618520 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618525 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618530 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618535 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618541 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618546 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618552 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618557 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618562 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618567 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618574 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618581 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618586 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618591 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618596 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618602 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618608 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618614 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618619 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618625 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618630 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618635 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618640 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618645 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618651 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618656 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618661 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618668 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618675 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618682 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618688 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618693 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618699 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618704 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618711 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618716 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618723 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618729 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618735 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618741 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618747 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618753 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618759 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618765 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618771 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618777 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618782 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618787 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618793 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618798 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618804 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.618816 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
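
The "unrecognized feature gate" warnings above show the kubelet parsing its gate configuration, dropping names it does not know (most of these are OpenShift cluster-level gates rather than kubelet gates), and then printing the effective map at feature_gate.go:386. A minimal Go sketch of that parse-and-warn pattern, assuming a simple comma-separated Name=bool syntax; the registry contents and function names are illustrative, not kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// known stands in for a registry of gates this component understands;
// everything else is warned about and dropped, as in the log above.
var known = map[string]bool{
	"KMSv1":                 true,
	"NodeSwap":              true,
	"EventedPLEG":           true,
	"VolumeAttributesClass": true,
}

// parseGates parses "A=true,B=false" into a map, warning on unknown names.
func parseGates(spec string) map[string]bool {
	gates := map[string]bool{}
	for _, kv := range strings.Split(spec, ",") {
		name, val, ok := strings.Cut(strings.TrimSpace(kv), "=")
		if !ok {
			continue
		}
		if !known[name] {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		if b, err := strconv.ParseBool(val); err == nil {
			gates[name] = b
		}
	}
	return gates
}

func main() {
	// GatewayAPI is not in the registry, so it is warned about and skipped.
	fmt.Println("feature gates:", parseGates("KMSv1=true,GatewayAPI=true,NodeSwap=false"))
}
```
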
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.619297 4739 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.622674 4739 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.622791 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623499 4739 server.go:997] "Starting client certificate rotation"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623530 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623692 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 06:49:01.231416728 +0000 UTC
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623775 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.643211 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.649336 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.655969 4739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.665440 4739 log.go:25] "Validated CRI v1 runtime API"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.682352 4739 log.go:25] "Validated CRI v1 image API"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.684136 4739 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.686660 4739 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-15-20-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.686711 4739 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
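
The certificate_manager.go:356 lines above report a certificate expiration and a rotation deadline that falls well before it; the kubelet picks a jittered point late in the certificate's validity so a fleet of nodes does not rotate all at once. A Go sketch of that idea, assuming a 70-90% jitter window purely for illustration (the exact upstream policy differs):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point late in the certificate's
// lifetime so rotation happens before expiry; the 70-90% window here is
// an assumption for illustration, not the exact kubelet policy.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	// Validity window shaped like the kube-apiserver-client-kubelet cert above.
	notBefore := time.Date(2025, 11, 26, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("expiration:       ", notAfter)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```
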
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703436 4739 manager.go:217] Machine: {Timestamp:2026-01-21 15:26:08.702501845 +0000 UTC m=+0.393208119 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9a598b49-28ac-478d-a565-c24c055cd14c BootID:3e0cd023-7dfe-46d8-b1ba-88fd833b7603 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:44:39:a1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:44:39:a1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ee:e4:b8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:35:30:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d5:2c:6a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:db:f9 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f1:df:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:55:fc:41:88:74 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:7a:21:16:dc:ee Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703611 4739 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703738 4739 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704019 4739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704169 4739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704220 4739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704420 4739 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704433 4739 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704637 4739 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704672 4739 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
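
The nodeConfig dump above carries the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10% of capacity, imagefs.available < 15%, and so on). A simplified Go sketch of how such a threshold, either an absolute quantity or a percentage of capacity, is compared against an observed signal; the types are reduced for illustration and are not the kubelet's:

```go
package main

import "fmt"

// Threshold mirrors the shape seen in the nodeConfig dump: a signal,
// and either an absolute quantity or a percentage of capacity.
type Threshold struct {
	Signal     string
	Percentage float64 // fraction of capacity; 0 if Quantity is set
	Quantity   int64   // bytes; 0 if Percentage is set
}

// exceeded reports whether available capacity has fallen below the threshold.
func exceeded(t Threshold, available, capacity int64) bool {
	floor := t.Quantity
	if t.Percentage > 0 {
		floor = int64(t.Percentage * float64(capacity))
	}
	return available < floor
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}

	// Capacities taken from the machine dump above (memory and /dev/vda4).
	fmt.Println(exceeded(memory, 64<<20, 25199480832)) // true: under 100Mi free
	fmt.Println(exceeded(nodefs, 20<<30, 85292941312)) // false: ~23% free
}
```
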
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704965 4739 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705054 4739 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705639 4739 kubelet.go:418] "Attempting to sync node with API server" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705657 4739 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705679 4739 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705691 4739 kubelet.go:324] "Adding apiserver pod source" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705703 4739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.707164 4739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.707189 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.707265 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.707277 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.707336 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.707454 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708085 4739 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708634 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708656 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708664 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708671 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708682 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708689 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708695 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708706 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708714 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708721 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708731 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708737 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.711478 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.711917 4739 server.go:1280] "Started kubelet"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712240 4739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712973 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused
Jan 21 15:26:08 crc systemd[1]: Started Kubernetes Kubelet.
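
The plugins.go:603 lines above enumerate the in-tree volume plugins as they are loaded into a registry keyed by plugin name ("kubernetes.io/empty-dir", "kubernetes.io/csi", and so on). A stripped-down Go sketch of that lookup structure; the interface is reduced to a name for illustration, while real plugins also implement mount and unmount operations:

```go
package main

import "fmt"

// VolumePlugin is a minimal stand-in for the kubelet's volume plugin
// interface; only the name lookup is modeled here.
type VolumePlugin interface {
	Name() string
}

type plugin string

func (p plugin) Name() string { return string(p) }

// registry maps plugin names, as printed by plugins.go:603, to plugins.
var registry = map[string]VolumePlugin{}

func load(p VolumePlugin) {
	registry[p.Name()] = p
	fmt.Printf("Loaded volume plugin %q\n", p.Name())
}

func main() {
	for _, name := range []string{
		"kubernetes.io/empty-dir",
		"kubernetes.io/host-path",
		"kubernetes.io/configmap",
		"kubernetes.io/csi",
	} {
		load(plugin(name))
	}
	fmt.Println("plugins loaded:", len(registry))
}
```
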
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712414 4739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.714237 4739 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.714849 4739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715707 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715750 4739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716547 4739 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716561 4739 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716711 4739 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715880 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:22:56.24911715 +0000 UTC
Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718000 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718107 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms"
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.718413 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused
Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718480 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720876 4739 factory.go:153] Registering CRI-O factory
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720901 4739 factory.go:221] Registration of the crio container factory successfully
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720954 4739 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720963 4739 factory.go:55] Registering systemd factory
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720969 4739 factory.go:221] Registration of the systemd container factory successfully
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.721000 4739 factory.go:103] Registering Raw factory
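
ratelimit.go:55 above limits the podresources endpoint to qps=100 with burstTokens=10, standard token-bucket semantics: up to ten requests may pass back-to-back, after which tokens refill at 100 per second. A sketch of those semantics using golang.org/x/time/rate; the wiring is illustrative, not the kubelet's actual gRPC interceptor:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// qps=100, burstTokens=10, as in the log line above.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	allowed, throttled := 0, 0
	for i := 0; i < 25; i++ { // a burst of 25 back-to-back requests
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	// Roughly the first 10 pass immediately; the rest are throttled
	// until the bucket refills at 100 tokens/second.
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}
```
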
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.721015 4739 manager.go:1196] Started watching for new ooms in manager
Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.717302 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc877617b33de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,LastTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.728604 4739 manager.go:319] Starting recovery of all containers
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.733746 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734093 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734205 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734317 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734450 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734572 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734683 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734790 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
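
The reconstruct.go:130 entries here and in the run that follows are the kubelet rebuilding its actual state after a restart: it rescans the volume directories left on disk under /var/lib/kubelet/pods and marks each volume "uncertain" until it can decide whether to remount or clean it up. A directory-walk sketch of that scan; the path layout follows the kubelet convention, and the marking logic is simplified for illustration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// scanPodVolumes walks /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name>
// and reports each volume found, the shape of state the reconstruct.go
// entries are rebuilding after a kubelet restart.
func scanPodVolumes(root string) error {
	pods, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, pod := range pods {
		volRoot := filepath.Join(root, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod has no volumes directory
		}
		for _, plug := range plugins {
			vols, _ := os.ReadDir(filepath.Join(volRoot, plug.Name()))
			for _, v := range vols {
				fmt.Printf("uncertain volume: pod=%s plugin=%s name=%s\n",
					pod.Name(), plug.Name(), v.Name())
			}
		}
	}
	return nil
}

func main() {
	if err := scanPodVolumes("/var/lib/kubelet/pods"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
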
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734990 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735102 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735223 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735331 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735442 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735551 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735658 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735765 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735876 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735995 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736099 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736203 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736306 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736396 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736501 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736588 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736674 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736792 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736931 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737028 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737120 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737226 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737311 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737443 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737541 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737621 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737706 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737856 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737981 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738069 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738149 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738235 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738325 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738415 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738505 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738592 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738691 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738782 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738890 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738990 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739082 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739165 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739244 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739360 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739463 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740197 4739 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740307 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740393 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740479 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740560 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740654 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740737 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740841 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740937 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741019 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741121 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741206 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741289 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741369 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741452 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741548 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741632 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741716 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741854 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741969 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742091 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742179 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742260 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742369 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742449 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742527 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742610 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742691 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742769 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742872 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742958 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743065 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743162 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743241 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743320 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743399 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743485 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743570 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743652 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743735 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743847 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743932 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744023 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744103 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744182 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744260 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744350 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744434 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744512 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744597 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744679 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746245 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746374 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746460 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746550 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746647 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746736 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746837 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746934 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747016 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747095 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747210 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747293 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747384 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747465 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747547 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747627 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747708 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747805 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747929 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748016 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748100 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748176 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748255 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748340 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748429 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748512 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748590 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748675 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748763 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748869 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748954 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749036 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749126 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749220 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749303 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749395 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749478 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749559 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749649 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749732 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749813 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751263 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751315 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751335 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751354 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751374 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751391 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751412 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751432 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751458 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751483 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751500 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751516 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751533 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751550 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751568 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751585 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751606 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751635 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751654 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751671 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751688 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751706 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751725 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751743 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751762 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751779 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751798 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751843 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751864 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751883 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751904 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751943 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751964 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751983 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752002 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752020 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752039 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752061 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752080 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752099 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752115 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752131 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752148 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752163 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752183 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752202 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752219 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752747 4739 manager.go:324] Recovery completed Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753011 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753048 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753072 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753089 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753106 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753122 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753142 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753161 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753180 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753197 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753219 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753237 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753257 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753278 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753297 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753315 4739 reconstruct.go:97] "Volume reconstruction finished" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753329 4739 reconciler.go:26] "Reconciler: start to sync state" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.762671 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766005 4739 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766542 4739 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766591 4739 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.779527 4739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781445 4739 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781512 4739 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781555 4739 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.781630 4739 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.782922 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.782994 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.818578 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.881765 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.918704 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.919182 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.933944 4739 policy_none.go:49] "None policy: Start" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.935396 4739 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.935459 4739 state_mem.go:35] "Initializing new in-memory state store" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.019789 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.072943 4739 manager.go:334] "Starting Device Plugin manager" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073009 4739 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073025 4739 server.go:79] "Starting device plugin registration server" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073419 4739 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073438 4739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073630 4739 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 15:26:09 crc 
kubenswrapper[4739]: I0121 15:26:09.073711 4739 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073721 4739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.079858 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.082093 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.082193 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083360 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083615 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083697 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084292 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084447 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084483 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085099 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085307 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085351 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.086042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.086102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088549 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088775 4739 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088898 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089922 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160177 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160346 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160874 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.161064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.161103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.173838 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175210 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.175773 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262643 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 
15:26:09.262687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262765 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262929 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc 
kubenswrapper[4739]: I0121 15:26:09.262935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263369 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.320144 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.376267 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377405 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377486 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.377905 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.407437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.413543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.428849 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.430675 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65 WatchSource:0}: Error finding container 8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65: Status 404 returned error can't find the container with id 8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65 Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.432270 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63 WatchSource:0}: Error finding container ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63: Status 404 returned error can't find the container with id ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63 Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.444157 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.449978 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.468026 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f WatchSource:0}: Error finding container ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f: Status 404 returned error can't find the container with id ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.469343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e WatchSource:0}: Error finding container a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e: Status 404 returned error can't find the container with id a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.472551 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc877617b33de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,LastTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.706076 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.706147 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.714660 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.719352 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:52:20.287469828 +0000 UTC Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.777980 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779057 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779124 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.779569 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.791637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.792945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.793867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.795226 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d06b5589947999bebc7f6c35dcdda98551733e34f1c1637a27f074005dd44b7a"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.796081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63"} Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.865192 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.865273 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: W0121 15:26:10.048807 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.048932 4739 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.121295 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Jan 21 15:26:10 crc kubenswrapper[4739]: W0121 15:26:10.155186 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.155267 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.579863 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582180 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.583126 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.714262 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.719664 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:21:50.726967273 +0000 UTC Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800004 4739 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800120 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800108 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.804965 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.804944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806445 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806563 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 
15:26:10.807364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.807388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.807399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.822260 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823898 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.824021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.824165 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828026 4739 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828161 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834369 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.845596 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.846684 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.715054 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.720404 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:27:04.169503073 +0000 UTC Jan 21 15:26:11 crc kubenswrapper[4739]: E0121 15:26:11.721948 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="3.2s" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.769420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.830404 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.149051 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.149129 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.183608 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185163 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.185614 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.529064 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.529149 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.714424 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.720747 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:34:11.556530351 +0000 UTC Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.845881 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.846215 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.846846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.849436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.849551 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.855042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.857437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.857568 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:13 crc kubenswrapper[4739]: W0121 15:26:13.036091 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:13 crc kubenswrapper[4739]: E0121 15:26:13.036180 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.715028 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.723665 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:54:03.430492689 +0000 UTC Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.861763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.861851 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.864264 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866038 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75" exitCode=0 Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866160 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866173 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.724668 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:56:50.5832448 +0000 UTC Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.871892 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2"} Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.873944 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057"} Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874695 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.030113 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.386564 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388262 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.724894 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:10:30.844896494 +0000 UTC Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878352 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057" exitCode=0 Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878431 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057"} Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878472 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.881887 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77"} Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.881912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec"} Jan 21 15:26:15 crc 
kubenswrapper[4739]: I0121 15:26:15.882047 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.645871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.646186 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.647793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.648555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.648593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.725882 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:31:51.593883988 +0000 UTC Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891287 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891324 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891395 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.892215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 
15:26:16.892256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.892266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.955863 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.956046 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.673944 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.726229 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:33:44.4181055 +0000 UTC Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.894438 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.894661 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.007664 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.326524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.727323 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:32:16.84050346 +0000 UTC Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.896541 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.896542 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:19 crc kubenswrapper[4739]: E0121 15:26:19.079991 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.571312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.646640 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.646780 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.728231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:59:46.466503212 +0000 UTC Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.902975 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f"} Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.903013 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.903153 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.728972 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:19:58.751456405 +0000 UTC
Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.904919 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.730028 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:14:39.762924517 +0000 UTC
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.774175 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.774329 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.835416 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.835912 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
event="NodeHasSufficientMemory" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.837080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.837093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.698695 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.699375 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.731069 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:47:05.917823271 +0000 UTC Jan 21 15:26:23 crc kubenswrapper[4739]: I0121 15:26:23.731520 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:44:52.34994379 +0000 UTC Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.310254 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.310464 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.716076 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.731642 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:37:39.970001907 +0000 UTC Jan 21 15:26:24 crc kubenswrapper[4739]: E0121 15:26:24.923098 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.032069 4739 certificate_manager.go:562] 
"Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.389805 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 15:26:25 crc kubenswrapper[4739]: I0121 15:26:25.732886 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:25:58.618809686 +0000 UTC Jan 21 15:26:25 crc kubenswrapper[4739]: W0121 15:26:25.732951 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:25 crc kubenswrapper[4739]: I0121 15:26:25.733051 4739 trace.go:236] Trace[1410109596]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:15.731) (total time: 10001ms): Jan 21 15:26:25 crc kubenswrapper[4739]: Trace[1410109596]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:26:25.732) Jan 21 15:26:25 crc kubenswrapper[4739]: Trace[1410109596]: [10.001899641s] [10.001899641s] END Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.733074 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:26 crc kubenswrapper[4739]: W0121 15:26:26.078676 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.078762 4739 trace.go:236] Trace[1581342166]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:16.077) (total time: 10001ms): Jan 21 15:26:26 crc kubenswrapper[4739]: Trace[1581342166]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:26:26.078) Jan 21 15:26:26 crc kubenswrapper[4739]: Trace[1581342166]: [10.001577371s] [10.001577371s] END Jan 21 15:26:26 crc kubenswrapper[4739]: E0121 15:26:26.078783 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.102049 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup 
Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.102049 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.102141 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.153614 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.153673 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.733875 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:26:41.206061509 +0000 UTC
Jan 21 15:26:27 crc kubenswrapper[4739]: I0121 15:26:27.734625 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:28:39.714335166 +0000 UTC
Jan 21 15:26:28 crc kubenswrapper[4739]: I0121 15:26:28.736449 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:49:57.628944965 +0000 UTC
Jan 21 15:26:29 crc kubenswrapper[4739]: E0121 15:26:29.080184 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.646767 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.646849 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
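
Note: the start-of-body payload in the two 403 startup-probe records above is a standard Kubernetes Status object. The probe reaches the apiserver, but the anonymous GET of /livez is rejected because the public-info-viewer clusterroles have not been created yet while the control plane bootstraps. A small sketch that decodes just the fields visible in the log; the struct is hand-rolled for illustration rather than the vendored metav1.Status, and the message is shortened:

package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the subset of the Status object printed in the probe output.
type status struct {
	Kind    string `json:"kind"`
	Status  string `json:"status"`
	Message string `json:"message"`
	Reason  string `json:"reason"`
	Code    int    `json:"code"`
}

func main() {
	body := `{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}`
	var s status
	if err := json.Unmarshal([]byte(body), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%d %s: %s\n", s.Code, s.Reason, s.Message) // 403 Forbidden: forbidden: ...
}
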
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.737350 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:03:59.483225999 +0000 UTC
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.892668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893093 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893481 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893540 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.897966 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.931367 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.931787 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.932139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
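
Note: the check-endpoints readiness failures above are a different failure mode from the earlier TLS handshake timeouts and 403s: "connect: connection refused" means nothing was listening on 192.168.126.11:17697 yet. Functionally the kubelet's HTTPS probe is just a GET with a short timeout; a rough stand-in (the 1s timeout is an assumed stand-in for the pod's configured timeoutSeconds):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // stand-in for the probe's timeoutSeconds
		Transport: &http.Transport{
			// HTTPS probes do not verify the serving certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:17697/healthz")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode) // any 2xx/3xx counts as success
}
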
Jan 21 15:26:30 crc kubenswrapper[4739]: I0121 15:26:30.738237 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:40:40.236535265 +0000 UTC
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.116526 4739 trace.go:236] Trace[1770909743]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:16.549) (total time: 14567ms):
Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1770909743]: ---"Objects listed" error: 14567ms (15:26:31.116)
Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1770909743]: [14.567149841s] [14.567149841s] END
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.116569 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.117132 4739 trace.go:236] Trace[1224731557]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:17.357) (total time: 13759ms):
Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1224731557]: ---"Objects listed" error: 13759ms (15:26:31.117)
Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1224731557]: [13.759524563s] [13.759524563s] END
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.117158 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.119040 4739 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.639728 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53878->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.639792 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53878->192.168.126.11:17697: read: connection reset by peer"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.739040 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:34:11.622470426 +0000 UTC
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.790274 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791606 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.032873 4739 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.033521 4739
kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.033615 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.037965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.051731 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055927 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.070301 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075690 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075847 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.086343 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090223 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090373 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.102736 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.103077 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.103174 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.204018 4739 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.304774 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.405359 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.505982 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.607109 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.708124 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.740134 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:22:55.521368079 +0000 UTC Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.765982 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811551 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.914014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.914029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.942259 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.944225 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" exitCode=255 Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.944284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.973755 4739 scope.go:117] "RemoveContainer" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.077732 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122211 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.400972 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.418199 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.430000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.438850 4739 csr.go:261] certificate signing request csr-84q44 is approved, waiting to be issued Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.445251 4739 csr.go:257] certificate signing request csr-84q44 is issued Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532875 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635268 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.725969 4739 apiserver.go:52] "Watching apiserver" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.729464 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730024 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xlqds","openshift-multus/multus-mqkjd","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-t4z5x","openshift-dns/node-resolver-ppn47","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-additional-cni-plugins-qhmsr","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.730621 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730629 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.730979 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731020 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.731098 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731420 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731689 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731720 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731739 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740211 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740426 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:24:55.15301929 +0000 UTC Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745954 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746111 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746141 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746129 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746210 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745895 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746516 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746534 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746625 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746674 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746786 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746833 4739 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746842 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746836 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746876 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747249 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747431 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747794 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747854 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747942 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.748004 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.750284 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.790344 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.809274 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.818664 4739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.822871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837601 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837702 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837740 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837846 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.837936 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.337879986 +0000 UTC m=+26.028586240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838043 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838286 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838467 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838700 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838973 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839074 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839174 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839317 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839373 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839397 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839070 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839231 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839393 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839436 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839451 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839450 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839654 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839676 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839694 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839723 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839787 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839894 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839967 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840027 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840043 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840199 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840271 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840292 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840334 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840477 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840661 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840889 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841026 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841051 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841073 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841091 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841244 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841313 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841429 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841484 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841504 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841647 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841673 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841775 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841793 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842406 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842446 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842464 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842483 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842502 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842598 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842634 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842785 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842804 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842842 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842900 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842979 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843508 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843579 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843600 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\"
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843776 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844013 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844034 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844101 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844119 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844189 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" 
(UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844351 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844368 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844468 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844490 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod 
\"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844639 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844701 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844756 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844790 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 
15:26:33.844911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: 
\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845085 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845194 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845226 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845242 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845294 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845362 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845377 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845447 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: 
\"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845532 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845688 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845700 4739 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845711 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845721 4739 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845731 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845743 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845753 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845765 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845777 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845788 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845811 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845835 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845845 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845857 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 
15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845868 4739 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845878 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845889 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845899 4739 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845912 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845923 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845935 4739 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845954 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845973 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845987 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846001 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846013 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850372 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861395 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858893 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.863948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861271 4739 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841171 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841496 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841806 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843913 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844903 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845071 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876811 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845470 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845427 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845501 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846076 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846091 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846326 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846568 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.847273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.847059 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.849130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.849595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851345 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851839 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852032 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852298 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852369 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855215 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.856151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.857562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.857798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858792 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.860125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.860180 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.862009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.863010 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864538 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864907 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865373 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866351 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866679 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.867500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.867712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.868049 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.869932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.870368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.870862 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871838 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872065 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872448 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.873161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.873876 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875529 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875652 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876037 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876318 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876453 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.877163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.877542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879062 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879242 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.880383 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.880712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.881037 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.881332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.882663 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.882852 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883991 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.884273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.885317 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.887243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.888112 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.888809 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889074 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389018809 +0000 UTC m=+26.079725273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889526 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389515131 +0000 UTC m=+26.080221605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.889573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889653 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889675 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889720 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389711516 +0000 UTC m=+26.080417970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.890075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.890796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.891256 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.891357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.892158 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.892922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.893639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.893728 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.894495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.894749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895384 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895475 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895730 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896139 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896298 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.897089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.897316 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.912861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913951 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.914333 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.914962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.915932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.916083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.916162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.917332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.918449 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.918543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.919028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.920625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.920778 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.921459 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.924553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.925510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.937280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.938194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947641 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947688 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947708 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947786 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.447754665 +0000 UTC m=+26.138460929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948735 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948880 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc 
kubenswrapper[4739]: I0121 15:26:33.948959 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949134 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949305 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949341 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949439 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949612 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955442 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955466 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955508 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod 
\"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955574 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955948 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955934 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955979 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957141 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958163 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958261 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: 
\"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959209 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962176 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962595 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.963352 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964981 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.965007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.965067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968031 4739 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968082 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968097 4739 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968108 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968127 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968074 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968236 4739 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968268 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968281 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968295 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968305 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968316 4739 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968325 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968335 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968345 4739 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968355 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968370 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968381 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968392 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968422 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968434 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968445 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968456 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968466 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968476 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968486 4739 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968496 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968505 4739 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968515 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968525 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968538 4739 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968550 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968563 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968573 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968582 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968591 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968601 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968610 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968621 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968633 4739 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968644 4739 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968654 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968663 4739 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968674 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968683 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968675 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968694 4739 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968745 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968760 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968772 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968783 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968794 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968806 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968851 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968862 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968876 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968894 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968907 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968917 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968932 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968945 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968959 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968989 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969003 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969014 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969024 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969035 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969045 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969056 4739 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969068 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969078 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969089 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969099 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969109 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969120 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969131 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969143 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969156 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969166 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969176 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969187 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969198 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969208 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969219 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969230 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969241 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969253 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969265 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969277 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969286 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969296 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969306 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969318 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969328 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969338 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969350 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969360 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969371 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969380 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969392 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969404 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969414 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969425 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969435 4739 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969445 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969457 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969467 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969478 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969492 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969502 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969511 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969523 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969534 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969545 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969554 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969565 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969577 4739 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969587 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969597 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969607 4739 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969617 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969628 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969660 4739 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969672 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969684 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969694 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969705 4739 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969714 4739 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969725 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969735 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969756 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969766 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969776 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970274 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969785 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976916 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976931 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976942 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976952 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976963 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976976 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976986 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976998 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977009 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977022 4739 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977033 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977045 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath 
\"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977053 4739 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977146 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977171 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977189 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977205 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977220 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977236 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.978784 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 
15:26:33.985667 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.986858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.987784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.990374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.990849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.993725 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.995855 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.999206 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.001263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.009570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.026484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.019048 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.028300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.028439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.037864 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.043191 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.043797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.058611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.064957 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078593 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078643 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078658 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078673 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078690 4739 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078719 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078733 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078745 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078757 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078771 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.079884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.079981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080184 4739 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.081674 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.083727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.092268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mqkjd" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.100544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.106351 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.111094 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.114498 4739 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.125342 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.135478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.153321 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: W0121 15:26:34.163254 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1 WatchSource:0}: Error finding container bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1: Status 404 returned error can't find the container with id 
bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1 Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.165923 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.177730 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.197552 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.225333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.249756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.273390 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.294612 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.309178 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.331269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.374074 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.383480 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.383746 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.383729308 +0000 UTC m=+27.074435572 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.384050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.411266 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.419302 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.424477 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.438961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.438991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439017 4739 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448274 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 15:21:33 +0000 UTC, rotation deadline is 2026-10-10 18:48:45.374571411 +0000 UTC Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448341 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6291h22m10.926233218s for next certificate rotation Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448510 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.470205 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487390 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487580 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487598 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487610 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487670 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.487652714 +0000 UTC m=+27.178358978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487723 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487731 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487738 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487765 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.487751577 +0000 UTC m=+27.178457841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488486 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488522 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.488513196 +0000 UTC m=+27.179219460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488859 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488886 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.488878846 +0000 UTC m=+27.179585110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.490125 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.529398 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542591 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.546914 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.558940 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.567877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.584757 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.617520 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.632279 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646403 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.647840 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.672360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a
25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.689995 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.712112 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.728948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.740592 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:19:00.00730111 +0000 UTC Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749969 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.787897 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.789107 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.790087 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.790955 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.792572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.793325 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.794641 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.795505 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.796897 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.797626 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.798835 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.799775 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.801082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.801800 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 15:26:34 
crc kubenswrapper[4739]: I0121 15:26:34.802594 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.803759 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.804629 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.805683 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.806480 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.807319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.808530 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.809330 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.809982 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.811420 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.812027 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.813422 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.814391 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.820749 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.821650 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.823702 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.824401 4739 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.824646 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.827279 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.828062 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.828728 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.831143 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.832493 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.833286 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.834671 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.835784 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.837103 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.837947 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.839301 4739 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.840660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.841362 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.842575 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.843395 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.844997 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.845731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.846476 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.847595 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.848455 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.849727 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.850409 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858288 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.990502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.990566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.992149 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.992178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"d75ecc673914d62b75e0f56fcea114a20f8b9e2b96f3c609d58b75a72db4a10b"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.993625 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a7ca3303b7e3a917e7416d98a8180614463a788e53597becc4bf40ec23d11e0d"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994737 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0594563123e1c326effeec6ba21a04f23fe4d9004197dadfb02a65dbeb5573a8"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996574 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a" exitCode=0 Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996639 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996663 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"0aeeca19fcaed84c23a97affb5713825fb8fa16e6d2cae9b568c96f1ffdd5b82"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007844 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246" exitCode=0 Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"553b4222393fc78ab126d92719cf4b6b687bd357ca8d5b7bbbfd4a230a24fafe"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.011524 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.018182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ppn47" event={"ID":"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f","Type":"ContainerStarted","Data":"f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.018237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ppn47" event={"ID":"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f","Type":"ContainerStarted","Data":"5ac176c2bd0750cd304405cf565c4459d9ef3fcd9a81bf0a81cb2e5ae52bda52"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020363 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020376 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"8794a32c9efe67c2f935fb77c1f977236743bb55d779dc3dec33a7a02dc47820"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.035684 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.055689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064702 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.072199 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.090161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.111726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167881 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.179976 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.214806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.255042 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270845 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.288435 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.306177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.322129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.345404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.360230 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.373954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374011 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374041 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.393628 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.399771 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.400331 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.400294687 +0000 UTC m=+29.091000951 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.412708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.434573 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.458001 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.474634 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478646 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501799 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501883 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501927 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501945 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501952 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501883 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501997 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502007 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501912 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.501894335 +0000 UTC m=+29.192600599 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502116 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502043319 +0000 UTC m=+29.192749723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502147 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502137331 +0000 UTC m=+29.192843795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502189 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502158881 +0000 UTC m=+29.192865355 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.504383 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.521926 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.546567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc 
kubenswrapper[4739]: I0121 15:26:35.572708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.599903 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.612598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687155 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.741052 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:50:50.663460289 +0000 UTC Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.782667 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.782897 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.783388 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.783690 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.783942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.784154 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.893979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997868 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.030838 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4" exitCode=0 Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.030919 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.052637 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.070038 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.084041 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.099527 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101661 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101789 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.113522 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.133651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.309937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.309998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310084 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414138 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.575476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.595811 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.615714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.637343 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.643058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.643143 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.657672 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.668691 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.677872 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.679301 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.694327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.710915 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.730549 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.741771 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:46:42.973202951 +0000 UTC Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746780 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.765215 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849554 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.851855 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.872708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.922403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.952993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953057 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.960702 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.994569 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.020050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673147
31ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.038697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.050965 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.057015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.057030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.071509 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.090870 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.112273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.137105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159755 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.180101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8zn2s"] Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.180621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.183622 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.183957 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.184620 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.185642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.198979 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.216223 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.242812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.248076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.248132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 
15:26:37.248183 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.263363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.281523 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.297343 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.312885 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.330425 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349070 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.350839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.356329 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365740 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365769 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.374115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.377949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.392272 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.407967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.428706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.442459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.449543 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.449771 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.449739321 +0000 UTC m=+33.140445585 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.458752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.494281 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.550920 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:37 crc 
kubenswrapper[4739]: E0121 15:26:37.551126 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551604 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551653 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.551631387 +0000 UTC m=+33.242337641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551670 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.551663117 +0000 UTC m=+33.242369381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551718 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551734 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551748 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551782 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.55177191 +0000 UTC m=+33.242478174 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551899 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551954 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551977 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.552071 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.552042987 +0000 UTC m=+33.242749411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680104 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.742775 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:05:18.214414514 +0000 UTC Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.781870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.781993 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.782054 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.782087 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.782223 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.782316 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783684 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887356 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.990018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.990029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.044295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8zn2s" event={"ID":"4f22c949-cafc-4c90-af3b-a0c01843b8c1","Type":"ContainerStarted","Data":"a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.044381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8zn2s" event={"ID":"4f22c949-cafc-4c90-af3b-a0c01843b8c1","Type":"ContainerStarted","Data":"f96291527f818502ba9d41555e4273acbeb3b1fb57bed1fd27fa625f2fd15f3f"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.047641 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40" exitCode=0 Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.047711 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.053144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.053215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.078306 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.095961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097293 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.098039 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.119796 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.146024 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207442 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207475 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.246016 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.278092 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.292631 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.305539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.320099 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.335338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.349161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.362845 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.383718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.398329 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.411314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413671 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.423952 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.438311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.453629 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.466246 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.477622 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.490760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.504234 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516938 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.518627 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.534201 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.552686 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.566924 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.583412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.595307 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.608319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.625335 4739 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.625997 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b
1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/pods/etcd-crc/status\": read tcp 38.102.83.224:38888->38.102.83.224:6443: use of closed network connection" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.722979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723057 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723099 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.743348 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:34:28.617299872 +0000 UTC Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.794539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.822356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.827081 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.843093 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.858161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.872300 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.888300 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.907994 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930629 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.933655 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.949419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.963342 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.975461 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.988890 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.007713 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.021153 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033762 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.035375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.058179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59"}
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.061854 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58" exitCode=0
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.061908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58"}
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.090209 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.112116 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.131890 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.146707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.160238 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.173877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.193973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z 
is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.208104 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.222377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.238367 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241327 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241369 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.252309 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.265657 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.285985 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.318642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345272 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.358756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.399558 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.440290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448441 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.479115 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.518419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551337 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551416 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.560063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.607392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.636054 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654965 4739 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.686224 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.720567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.744086 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:17:17.014796931 +0000 UTC Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758517 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.760989 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782299 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783024 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783150 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783445 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.796592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.837262 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.878240 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:
26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.914880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.953721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964264 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066758 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.072932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666"}
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.077840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9"}
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.077832 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9" exitCode=0
Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.091336 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.110417 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.125891 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.140586 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.157763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.170965 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171056 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.197095 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.245312 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276717 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.285539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.318151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.358722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379427 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.397632 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.437469 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.479451 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.518505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.558084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694367 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.745029 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:41:25.695457793 +0000 UTC Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.796933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797105 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.900925 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.005041 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.107456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.107859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108231 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315399 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.498251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.498965 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.498936975 +0000 UTC m=+41.189643249 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600294 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600383 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600404 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600420 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600433 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600442 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600449 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 
15:26:41.600473 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600539 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600507751 +0000 UTC m=+41.291214025 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600562 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600551102 +0000 UTC m=+41.291257376 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600590 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600582303 +0000 UTC m=+41.291288577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600615 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600811 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600766788 +0000 UTC m=+41.291473082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730187 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.745391 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:35:06.104884479 +0000 UTC Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782596 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782623 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782714 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.782771 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.782939 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.783044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833196 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041445 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041481 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236933 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.248890 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256230 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.269483 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.322726 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.326375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327226 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430960 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.533869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638348 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741974 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.746389 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:36:21.789862728 +0000 UTC Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845339 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948319 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.093219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.098594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.111325 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.124782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.138282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154018 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154384 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.158737 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.188356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.211096 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.237024 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257133 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.285780 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.301963 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.323876 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.338002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.354689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.373854 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.390829 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.405685 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.422699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.437505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.461806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463465 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.477440 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.495376 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.513661 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.530726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.544456 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.561481 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc 
kubenswrapper[4739]: I0121 15:26:43.567091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567165 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.578060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.595666 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.613883 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.640030 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.657248 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669778 4739 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.674166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.746886 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:04:37.910378693 +0000 UTC Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772658 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782680 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782758 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874906 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977952 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184977 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294787 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398451 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.747483 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:15:56.269751405 +0000 UTC Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.811961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812454 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.024650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025101 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.110029 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010" exitCode=0 Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.110124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111448 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111464 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.126361 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.141050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.157567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.175082 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.191065 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.208140 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.221477 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233449 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233479 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.243348 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.255607 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.275115 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.280311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.282441 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.305278 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.331973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.336876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.336971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.348422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.362708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.379281 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.398269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.412084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.426688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.440403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441593 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.455127 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.468276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.487109 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.501102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.516549 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.531404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.547916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.571381 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.586228 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.603773 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.618991 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647714 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.748540 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:36:04.100153981 +0000 UTC Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.781950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.782120 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782202 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.782265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782591 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854551 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957135 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060660 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164761 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.212277 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"] Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.212847 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: W0121 15:26:46.214901 4739 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 15:26:46 crc kubenswrapper[4739]: E0121 15:26:46.214979 4739 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.215931 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.231805 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.258264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.277303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.296898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.310718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.324667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.336561 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.348422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357658 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.361396 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.375651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.375992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376366 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.379934 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.395618 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.412635 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.435233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.448522 4739 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 
15:26:46.459411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.460416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.460701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.465233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.467078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480107 
4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480133 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.481120 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.482186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.749028 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:53:19.782219981 +0000 UTC
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.127054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2"}
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.158714 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" probeResult="failure" output=""
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.312977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.319375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.329362 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"] Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.330196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.330280 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.348437 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.367038 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.369561 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.369604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.389486 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.404746 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.413963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414449 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.429721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.445895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.460636 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.470161 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.470205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.470394 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.470484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:47.970459686 +0000 UTC m=+39.661165940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.475962 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.489477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.498036 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.510386 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517914 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.521809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.533285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.545555 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.558362 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.569020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.581578 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.589939 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621102 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723707 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.749995 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:30:37.01989907 +0000 UTC Jan 21 15:26:47 crc kubenswrapper[4739]: W0121 15:26:47.751888 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36eff52d_b31b_4ed6_b48c_62246caf18d5.slice/crio-08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5 WatchSource:0}: Error finding container 08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5: Status 404 returned error can't find the container with id 08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5 Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782603 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782603 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.782849 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.782958 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.783029 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.827807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828109 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.975953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.976177 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.976296 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:48.976270217 +0000 UTC m=+40.666976481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035731 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.132355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242948 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655586 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.750851 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:42:11.774319017 +0000 UTC Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758437 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.782870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.783046 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.825788 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc5912
83209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.841331 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.856642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861324 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.872284 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.886192 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.903237 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.915378 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.930060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.942276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.956901 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.970484 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.c
ncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.987605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.987802 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.987907 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:50.98788673 +0000 UTC m=+42.678592994 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.989714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvs
witch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.000582 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.015072 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.029842 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.043925 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.060722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066898 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.067007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.137494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.151395 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.165602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169594 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.176122 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.188883 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.201356 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.222996 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.234333 4739 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.247944 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.265680 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271626 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271651 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.282262 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.300368 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.327705 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.345452 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.359225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373739 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.387894 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.408430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc 
kubenswrapper[4739]: I0121 15:26:49.478313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581866 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.595140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.595440 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.595418413 +0000 UTC m=+57.286124677 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684136 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696753 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696868 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696978 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697101 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697146 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697164 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697175 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697183 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697214 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697188855 +0000 UTC m=+57.387895299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697315 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697295097 +0000 UTC m=+57.388001361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697329 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697322738 +0000 UTC m=+57.388029002 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697708 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697809 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697906 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.698030 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.698011556 +0000 UTC m=+57.388717820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.752644 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:29:47.291767092 +0000 UTC Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782202 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782297 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782335 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782436 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787981 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890882 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097379 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303861 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408213 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718402 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.753574 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:43:14.024230091 +0000 UTC Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.782325 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:50 crc kubenswrapper[4739]: E0121 15:26:50.782617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.923963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924164 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.010397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.010564 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.010633 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:55.010613904 +0000 UTC m=+46.701320178 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026798 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.130009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.130021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.147010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.174792 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.191048 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.206082 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.222613 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233532 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.241016 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.262867 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.281098 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.302922 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.327808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337248 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.345843 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.359871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.373662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.389554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.408241 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.425418 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.440880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.449375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.464300 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647809 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751182 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.753881 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:23:42.270883229 +0000 UTC
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.781922 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.781957 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.782022 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782196 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782351 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782561 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855174 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.165004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.165018 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372400 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.480004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.552006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.552019 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.567197 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.589313 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.595958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596088 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.613879 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
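Every retry above fails at the same TLS step: the serving certificate of the "node.network-node-identity.openshift.io" webhook listening on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A minimal Go sketch that reproduces the x509 verdict by dialing the endpoint named in the log and printing the leaf certificate's validity window; InsecureSkipVerify is an inspection-only assumption here, since a verifying client would fail exactly as the kubelet does:

    // certcheck.go - inspect the serving certificate of the webhook endpoint
    // named in the errors above. Verification is deliberately skipped so the
    // expired leaf can still be read and printed.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspection only: fetch the cert even though it is expired
        })
        if err != nil {
            log.Fatalf("dial failed: %v", err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", leaf.Subject)
        fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
        if now := time.Now(); now.After(leaf.NotAfter) {
            // Same verdict x509 verification reports in the log entries above.
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
        }
    }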
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.634652 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639980 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.655607 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.656275 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
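The give-up entry at 15:26:52.656275 is the end of a fixed retry budget: the kubelet attempts the status patch a bounded number of times per sync (nodeStatusUpdateRetry, five in the upstream kubelet, which matches the five failed attempts logged here) and then reports "update node status exceeds retry count"; the next sync period restarts the cycle, which is why the identical payload recurs a few hundred milliseconds apart. A sketch of that bounded-retry shape, with illustrative names and a stubbed failure rather than actual kubelet source:

    // retrysketch.go - the retry shape behind the log entries above: a fixed
    // number of attempts per sync, then a single "exceeds retry count" error.
    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5 // matches the five failed attempts before the give-up line

    // tryUpdateNodeStatus stands in for the PATCH against the node's status
    // subresource; in this log every attempt dies at the admission webhook.
    func tryUpdateNodeStatus() error {
        return errors.New("failed calling webhook: tls: certificate has expired or is not yet valid")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            err := tryUpdateNodeStatus()
            if err == nil {
                return nil
            }
            fmt.Printf("Error updating node status, will retry: %v\n", err)
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Printf("Unable to update node status: %v\n", err)
        }
    }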
event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.754430 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:34:06.44359789 +0000 UTC Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.782573 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.782746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.969013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.969030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071487 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.157011 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.160482 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" exitCode=1 Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.160540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.161355 4739 scope.go:117] "RemoveContainer" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173385 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.181670 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.196733 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.213756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.232459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.245880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.258795 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.277583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.279535 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.296294 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.310304 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.327440 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.354716 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d3
5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.367430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381782 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.383640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.399000 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.414720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.436514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.460187 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485084 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485119 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.696971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.755268 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:24:36.156106439 +0000 UTC Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.782866 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783050 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.783111 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.783172 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783286 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.800989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907419 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113535 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.167051 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.170890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.171478 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.185976 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.199987 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.215136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216573 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.230448 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.243226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.265362 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.287993 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.299930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.316023 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319672 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.330146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.347614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.370772 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.384377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.400849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.414208 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422747 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.427382 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.441285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526991 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.755768 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:10:36.862798782 +0000 UTC Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.781885 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:54 crc kubenswrapper[4739]: E0121 15:26:54.782045 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.063385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.063554 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.063618 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:03.063600131 +0000 UTC m=+54.754306395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145845 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.176802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.177434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180341 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" exitCode=1 Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180438 4739 scope.go:117] "RemoveContainer" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.181443 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.181734 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.192160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.202846 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.214898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.229877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc 
kubenswrapper[4739]: I0121 15:26:55.248660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248594 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:2
6:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.266324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.278830 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.294150 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.310540 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.331032 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] 
Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.345915 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.361238 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.380074 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.392904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.407402 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.429151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.441283 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454216 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.556746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557542 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.661101 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.756583 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:40:34.51970179 +0000 UTC
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.763969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782517 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.782682 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.782794 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782517 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.783013 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.866720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970413 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.075826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179955 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.184568 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.188494 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:26:56 crc kubenswrapper[4739]: E0121 15:26:56.188639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.203396 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3
751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.215450 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.227497 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.240426 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.251319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.260494 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:
46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.272063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282446 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.285842 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.299196 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.313060 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.329529 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.338533 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.348987 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.360609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.371297 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.388158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.400244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487735 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.591505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.591521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.757898 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:41:50.361476586 +0000 UTC Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.782640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:56 crc kubenswrapper[4739]: E0121 15:26:56.782809 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.104989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311589 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.413989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517804 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.630921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631557 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.733845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734435 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.758445 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:30:50.219925522 +0000 UTC
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782160 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782253 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.782701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.782666 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.783408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.940016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.940034 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042595 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350582 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457446 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662137 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.759390 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:24:19.865857543 +0000 UTC
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764698 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.782481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:26:58 crc kubenswrapper[4739]: E0121 15:26:58.782655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.795476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.809422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.821470 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.832523 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.843275 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.859553 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866665 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.877665 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.896261 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8
c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.914406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.929186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.942904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.954635 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.970583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.981627 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.996534 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.007718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.019264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:59Z is after 2025-08-24T17:21:41Z" Jan 21 
15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380571 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482856 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585514 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.687932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.687977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.760366 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:40:41.375900333 +0000 UTC Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.782757 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.782841 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.782928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.783124 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.783177 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.783832 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790566 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892926 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995964 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.761091 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:26:58.803213773 +0000 UTC
Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.782593 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:00 crc kubenswrapper[4739]: E0121 15:27:00.782740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.762322 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:46:50.114942641 +0000 UTC
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782108 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782147 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782215 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782297 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782379 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782447 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.841004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.856067 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.858004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.858021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.871252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.884957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.898742 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.911252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.922465 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.937458 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.948754 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960930 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.962133 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.972192 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.983509 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.997705 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.016570 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.030125 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.042987 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.057643 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063452 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063464 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.081132 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.093219 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169495 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272741 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478910 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581903 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685896 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.762501 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:59:50.42736893 +0000 UTC Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.781876 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.782070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782225 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.795547 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.813382 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.830354 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834624 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.849126 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.853988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.869586 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.869723 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077420 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.156337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.156506 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.156587 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:19.156568997 +0000 UTC m=+70.847275261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283981 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386236 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488443 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488525 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693997 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.762871 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:00:04.178664739 +0000 UTC Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782302 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782483 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782588 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782379 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.001984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002064 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104102 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206590 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308996 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.411894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.411987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618909 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721627 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.764533 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:09:23.631367772 +0000 UTC Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.812686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:04 crc kubenswrapper[4739]: E0121 15:27:04.812852 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824423 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824455 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.926987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927073 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030257 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236436 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339451 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442250 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544711 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647436 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.684418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.684881 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.684853667 +0000 UTC m=+89.375559971 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749591 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.765158 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:42:35.402418843 +0000 UTC Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.782663 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782494 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.782925 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.783092 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786315 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786370 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786398 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786460 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786561 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786576 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786589 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786576 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.786553286 +0000 UTC m=+89.477259600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786628 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786645 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.786632728 +0000 UTC m=+89.477338992 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786662 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786678 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786630 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786735 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.7867161 +0000 UTC m=+89.477422424 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786793 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:27:37.786782362 +0000 UTC m=+89.477488706 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853126 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853135 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.058984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical five-entry node-status block (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeated at 15:27:06.161, 15:27:06.265, 15:27:06.368, 15:27:06.470, 15:27:06.573 and 15:27:06.686 ...]
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.765748 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:26:36.820689955 +0000 UTC
Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.782165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:06 crc kubenswrapper[4739]: E0121 15:27:06.782294 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
[... identical node-status block repeated at 15:27:06.788, 15:27:06.890, 15:27:06.993, 15:27:07.096, 15:27:07.198, 15:27:07.300, 15:27:07.403, 15:27:07.506, 15:27:07.609 and 15:27:07.712 ...]
Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.766408 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:14:02.772264615 +0000 UTC
Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.781758 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.781933 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.782150 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.782219 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.782578 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.782635 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... identical node-status block repeated at 15:27:07.814, 15:27:07.917, 15:27:08.021, 15:27:08.123, 15:27:08.227, 15:27:08.330, 15:27:08.433, 15:27:08.536 and 15:27:08.643 ...]
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746371 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.767141 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:25:47.449699797 +0000 UTC
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.782465 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:08 crc kubenswrapper[4739]: E0121 15:27:08.782893 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.795117 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.804502 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.816035 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.827018 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.844310 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848187 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.859808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.872679 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.885142 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.897526 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.917090 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.941773 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.950985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.957077 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.968734 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.983315 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 
2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.996240 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.007339 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.024796 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.037280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054209 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157601 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260108 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362725 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465484 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.672051 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.767344 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:55:24.330102033 +0000 UTC Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.774562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.774913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775191 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782258 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.782931 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.783135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.783028 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.783028 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878907 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981795 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187641 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.188019 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290314 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392707 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598345 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701487 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.768107 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:44:04.874066561 +0000 UTC Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.782964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:10 crc kubenswrapper[4739]: E0121 15:27:10.783095 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804776 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907207 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010353 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.113980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114066 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216698 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.249250 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.252785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.253516 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.278350 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.293946 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.307701 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.323212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.324984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325429 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.336349 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.355541 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.373593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.387317 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.402600 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.416158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428241 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428744 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.453159 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b38
9bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.465562 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.480025 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.494373 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.509710 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.522752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530928 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.531008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.531017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.537612 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.769231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:04:50.274866029 +0000 UTC Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782233 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782284 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782622 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843353 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155077 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260443 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260476 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.263851 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.265179 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.267950 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" exitCode=1 Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.268099 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.268220 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.271169 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:12 crc kubenswrapper[4739]: E0121 15:27:12.271789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.290614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.302835 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.321577 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.346488 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.361785 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.363003 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.378186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.391164 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.406522 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.420588 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.434574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.451138 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.467259 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.483414 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.497804 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.514170 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.533623 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.544327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.556485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569645 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672444 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672452 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.769804 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:53:15.895995962 +0000 UTC Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774987 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.782189 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:12 crc kubenswrapper[4739]: E0121 15:27:12.782449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.878508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879124 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085505 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.187917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233687 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.247843 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252573 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.265027 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269917 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.281670 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.285557 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.288323 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.288501 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.289843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.291013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.299961 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.304978 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309808 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.314049 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.326008 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.326125 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.337455 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.350555 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.365863 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.379254 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.393403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.408084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.419657 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431239 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431317 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431352 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.433277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.446308 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.461239 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.480910 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service 
openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.494278 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.515518 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.530366 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534121 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534150 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.543738 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.556593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.770176 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:04:01.681876774 +0000 UTC
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782365 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782450 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782477 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
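The records above repeat one failure chain: no CNI plugin has written a config into /etc/kubernetes/cni/net.d/, the container runtime therefore reports NetworkReady=false, and the kubelet both marks the node NotReady and skips syncing any pod that needs a pod network. A minimal sketch of that readiness test, assuming only that "ready" means "at least one CNI configuration file exists in the directory"; the checkCNIConfig function below is a hypothetical illustration, not the kubelet's actual implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkCNIConfig is a stand-in for the runtime's network-readiness probe: it
// reports an error when the CNI conf directory holds no usable configuration,
// which is exactly the condition this log keeps printing.
func checkCNIConfig(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders commonly accept
			return nil
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := checkCNIConfig("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Println("NetworkReady=false:", err)
	}
}

Once the network provider (here, the crash-looping ovnkube-node pod) writes its config file, this check passes and the Ready condition flips back to True.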
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841351 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149445 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149495 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.251968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.251996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
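The rejected payloads earlier in the log are strategic-merge patches: "$setElementOrder/conditions" pins the ordering of the conditions list, while the "conditions" array carries only the entries being changed, merged into the existing list by their "type" key. A short sketch that decodes such a payload to see which conditions a patch touches; the literal below is a trimmed version of the first patch in this log (real payloads also carry full containerStatuses):

package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed strategic-merge patch of the shape the kubelet was sending; the
// uid is taken from the network-node-identity pod patch above.
const patch = `{
  "metadata": {"uid": "ef543e1b-8068-4ea3-b32a-61027b32e95d"},
  "status": {
    "$setElementOrder/conditions": [
      {"type": "PodReadyToStartContainers"},
      {"type": "Initialized"},
      {"type": "Ready"},
      {"type": "ContainersReady"},
      {"type": "PodScheduled"}
    ],
    "conditions": [
      {"lastTransitionTime": "2026-01-21T15:26:34Z", "status": "True", "type": "Ready"}
    ]
  }
}`

func main() {
	var doc struct {
		Status map[string]json.RawMessage `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		panic(err)
	}
	var conds []struct {
		Type   string `json:"type"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(doc.Status["conditions"], &conds); err != nil {
		panic(err)
	}
	for _, c := range conds {
		fmt.Printf("patch updates condition %q -> %q\n", c.Type, c.Status)
	}
}

Because the admission webhook vetoes the whole PATCH, none of these condition updates are persisted, and the status manager retries the identical payload on the next sync.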
Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354919 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457441 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457729 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560568 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.663965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664522 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.770589 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:50:59.822900017 +0000 UTC Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.782135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:14 crc kubenswrapper[4739]: E0121 15:27:14.782315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972365 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074902 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281132 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487134 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589897 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.771304 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:26:49.328396796 +0000 UTC Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782756 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782769 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.782946 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.783037 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.783150 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.897962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000355 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102501 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307848 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513773 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.772480 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:00:02.409091854 +0000 UTC Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.784118 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:16 crc kubenswrapper[4739]: E0121 15:27:16.784240 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826637 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135705 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238603 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.341005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.341013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.545966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.649031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751873 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.772952 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:10:02.932281542 +0000 UTC Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782500 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.782782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782801 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.782988 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.783082 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854944 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059994 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162247 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.265007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367914 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469881 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572570 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675448 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.774047 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:04:02.496437124 +0000 UTC Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778271 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.782435 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:18 crc kubenswrapper[4739]: E0121 15:27:18.782585 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.802146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.820852 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.833792 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.846724 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.857374 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.868331 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.877732 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.889216 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.901935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.919044 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b38
9bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.930674 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.943002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.955421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.966161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.977702 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982923 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.995197 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.007105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.017539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 
15:27:19.085694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.237272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.237492 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.237629 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:51.237587167 +0000 UTC m=+102.928293511 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.295008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.295028 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.398010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500530 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603354 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706159 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.774920 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:59:08.077225942 +0000 UTC Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782589 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782840 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809465 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809585 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.117000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.117013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.322006 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424508 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526799 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629137 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.732961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733055 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.775489 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:50:25.070527098 +0000 UTC
Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.781955 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:20 crc kubenswrapper[4739]: E0121 15:27:20.782127 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
[... node-status blocks continue at ~100 ms intervals, 15:27:20.836 through 15:27:21.688, elided ...]
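The NetworkPluginNotReady message above is literal: the kubelet (via CRI-O) treats the pod network as ready only once a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and on this node that directory is still empty, presumably because the multus/OVN pods seen elsewhere in this log have not written one yet. A self-contained sketch of that readiness check, using the directory path from the log (the accepted extensions are the usual libcni ones, stated here as an assumption):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet error above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", dir, err)
        }
        found := false
        for _, e := range entries {
            // libcni-style config extensions (assumption for this sketch).
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
        }
    }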
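Since the condition payload in these setters.go:603 entries is plain JSON, post-processing a log like this does not need regexes for the interesting part: the string inside condition={...} unmarshals directly into the upstream NodeCondition type. A small sketch (the sample string is copied from the entries above, with the message abbreviated):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Condition payload as printed by setters.go:603 (message shortened).
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var cond corev1.NodeCondition
        if err := json.Unmarshal([]byte(raw), &cond); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s is %s since %s: %s\n", cond.Type, cond.Status, cond.LastTransitionTime.Time, cond.Reason)
    }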
Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.775755 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:51:30.020920518 +0000 UTC
Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782052 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782133 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782357 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... node-status blocks continue at ~100 ms intervals, 15:27:21.791 through 15:27:22.717, elided ...]
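One more note before the status_manager failures recorded below (from 15:27:23.347 onward): every pod status patch is rejected because the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before this node's clock reading of 2026-01-21. The failing step is ordinary x509 validity checking during the TLS handshake; a self-contained sketch of the same check (the certificate path is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path to the webhook's serving certificate.
        data, err := os.ReadFile("/etc/webhook/serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        if now.After(cert.NotAfter) {
            // Same condition the TLS handshakes in the log report.
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        } else {
            fmt.Println("certificate valid until", cert.NotAfter)
        }
    }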
Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.776415 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:30:09.458655616 +0000 UTC
Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.782780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:22 crc kubenswrapper[4739]: E0121 15:27:22.783103 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
[... node-status blocks continue at ~100 ms intervals, 15:27:22.819 through 15:27:23.126, elided ...]
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322641 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322687 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005" exitCode=1 Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.323079 4739 scope.go:117] "RemoveContainer" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347249 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347794 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.361347 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.388271 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.400160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.413080 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.422846 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.433908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.448833 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452202 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.462277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.474886 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.485078 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.501962 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.523419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.534697 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554757 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.555324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.566683 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.577544 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.590786 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629561 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.641576 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.657461 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660843 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.672450 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675640 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.687505 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.701890 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.702018 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703572 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.777114 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:33:52.667444134 +0000 UTC Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.782422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.782548 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.782847 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.782956 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.783103 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.783171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807144 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909759 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909784 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114837 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114912 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217940 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.319970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.326046 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.326095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.340111 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.356854 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.372058 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.387221 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.400386 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.412709 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427534 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.431028 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.444839 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.454763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.464722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.473587 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.483434 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.495718 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.518389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.526553 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.538888 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.549048 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.558576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632223 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.735018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.735090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.735114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.735142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.735159 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.778133 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:03:21.243868929 +0000 UTC Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.782593 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:24 crc kubenswrapper[4739]: E0121 15:27:24.782772 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.837630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.837707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.837729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.837758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.837780 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.940604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.940643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.940654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.940669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.940680 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.042779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.042862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.042879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.042895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.042905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.145559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.145639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.145649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.145663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.145674 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.247481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.247530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.247545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.247561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.247573 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.349725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.349787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.349804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.349855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.349874 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.467481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.467541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.467566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.467588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.467604 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.569955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.570008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.570020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.570035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.570045 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.672335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.672392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.672408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.672430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.672447 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.775230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.775303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.775325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.775357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.775383 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.778372 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:00:36.388659701 +0000 UTC Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.782878 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.782938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783076 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.783124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.784272 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.784536 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877984 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.083491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.083532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.083545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.083559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.083570 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.189855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.190195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.190431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.190498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.190513 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.293650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.293682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.293693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.293708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.293719 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.397016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.397058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.397070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.397085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.397096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.499339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.499372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.499382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.499399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.499409 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.601709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.601751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.601763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.601779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.601798 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.704398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.704441 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.704451 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.704466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.704478 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.779376 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:51:17.578265641 +0000 UTC Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.782765 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:26 crc kubenswrapper[4739]: E0121 15:27:26.782989 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.807190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.807243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.807259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.807275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.807285 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.909483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.909517 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.909525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.909537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.909546 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:26Z","lastTransitionTime":"2026-01-21T15:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.011961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.011989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.011997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.012012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.012023 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.113724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.113770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.113782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.113806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.113844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.215701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.215742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.215753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.215770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.215782 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.318378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.318413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.318421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.318434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.318442 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.420648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.420685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.420695 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.420711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.420721 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.522431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.522471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.522482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.522497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.522508 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.625185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.625237 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.625246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.625261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.625271 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.728351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.728406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.728430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.728452 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.728470 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.780353 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:28:09.947159876 +0000 UTC Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.782742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.782884 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.782742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:27 crc kubenswrapper[4739]: E0121 15:27:27.782919 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:27 crc kubenswrapper[4739]: E0121 15:27:27.783008 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:27 crc kubenswrapper[4739]: E0121 15:27:27.783284 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.831737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.831784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.831799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.831848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.831864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.934746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.934857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.934880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.934904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.934923 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:27Z","lastTransitionTime":"2026-01-21T15:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.037257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.037299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.037310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.037325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.037335 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.139791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.139849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.139860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.139872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.139881 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.242056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.242093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.242102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.242117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.242127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.343935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.343975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.343998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.344013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.344022 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.446412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.446448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.446456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.446501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.446527 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.553214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.553264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.553272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.553284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.553298 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.655221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.655265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.655275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.655288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.655296 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.758192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.758264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.758286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.758322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.758359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.780720 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:55:12.039863341 +0000 UTC
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.782019 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:28 crc kubenswrapper[4739]: E0121 15:27:28.783084 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.794581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.804882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.815508 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.827314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.838432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.849136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.858445 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863105 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.872065 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.891740 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.902190 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.918828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e
11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.929387 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.939831 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.952207 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.963576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.977196 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.992000 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.001765 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067836 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170856 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.171689 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378205 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583764 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688225 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.781885 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:23:21.666568602 +0000 UTC Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783134 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783142 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783350 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783708 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791241 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894757 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998457 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100401 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.202764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408153 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511147 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614673 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.781953 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:30 crc kubenswrapper[4739]: E0121 15:27:30.782146 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.782268 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:22:36.268529372 +0000 UTC
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820206 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922334 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025197 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341943 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445630 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755449 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755491 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.781898 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.781938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782058 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.782100 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782396 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.782458 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:01:52.542841969 +0000 UTC
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.857949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858056 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
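Each setters.go:603 line above embeds the node's Ready condition as a JSON object, and that same payload is what later gets patched onto the Node object. A small sketch for pulling the fields out of one of these entries; the struct mirrors only the fields printed in the log, not the full k8s.io/api NodeCondition type.

// condparse.go - parse the condition payload quoted in a setters.go:603 line.
package main

import (
	"encoding/json"
	"fmt"
)

type readyCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c readyCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}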
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.064018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.064032 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.167011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.167020 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.372961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.578774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579545 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.782128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:32 crc kubenswrapper[4739]: E0121 15:27:32.782279 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.783079 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:08:16.913434431 +0000 UTC
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785809 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992311 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094877 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.196810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.196973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.300001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.402922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.402991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.505988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506087 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608602 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.711902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.711990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.781863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.782216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.782216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782750 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.783274 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:49:56.616187987 +0000 UTC
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805781 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.824724 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831346 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.845930 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850764 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.865361 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.884770 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890871 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.906865 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.907001 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
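[annotation] Every retry in the burst above fails identically: the TLS handshake with the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-21T15:27:33Z). A minimal Go sketch of the same NotAfter comparison the x509 verifier is reporting; the endpoint address is taken from the log, everything else is illustrative rather than the kubelet's actual code path:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint from the log: Post "https://127.0.0.1:9743/node?timeout=10s".
	// InsecureSkipVerify lets us inspect the expired certificate instead of
	// aborting the handshake the way the kubelet's client does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("serving cert NotAfter=%s expired=%v\n",
		cert.NotAfter.Format(time.RFC3339), time.Now().After(cert.NotAfter))
}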
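[annotation] Separately from the webhook failure, the Ready=False condition itself comes from the runtime network check: the kubelet keeps the node NotReady until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A small Go sketch of that directory probe, assuming the usual CNI config extensions (.conf, .conflist, .json); the path is taken from the log and the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig is a hypothetical helper mirroring the check behind the message
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file; node stays NotReady")
	}
}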
event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908740 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012152 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219176 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323387 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.425920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426477 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531295 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635968 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.781925 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:34 crc kubenswrapper[4739]: E0121 15:27:34.782102 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.783875 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:24:33.756202118 +0000 UTC
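The repeated NotReady condition points at a single root cause: the runtime reports the network unready until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A sketch of that readiness test as a plain directory scan; the real check lives in CRI-O/ocicni and also validates file contents, so this is only the idea:

```go
// Report whether a CNI conf dir contains any candidate network config.
// Mirrors the message "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // unreadable dir counts as "network not ready"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(cniConfPresent("/etc/kubernetes/cni/net.d"))
}
```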
Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782224 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.782510 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782285 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.782997 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.783097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.784271 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:16:42.441299959 +0000 UTC
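Note that the rotation deadline differs on every certificate_manager line (2026-01-12 above, 2025-12-15 here, earlier dates below) while the expiration stays fixed at 2026-02-24. That is consistent with client-go's certificate manager re-randomizing the deadline to roughly 70-90% of the certificate's validity each time it evaluates rotation. A sketch of that idea, with an assumed issue time; not the upstream implementation:

```go
// Pick a jittered rotation deadline inside [70%, 90%] of a cert's lifetime.
// Each call yields a different deadline, matching the log's behavior.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-30 * 24 * time.Hour)                 // assumed issue time
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
```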
Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.783941 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"
Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.784347 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:24:44.143402909 +0000 UTC
Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.787159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:36 crc kubenswrapper[4739]: E0121 15:27:36.788070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.802541 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
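The "SyncLoop ADD" above hands the new pod to a per-pod worker; when a sync attempt fails, as with the "network is not ready" pods here, the worker logs "Error syncing pod, skipping" and waits for the next sync trigger rather than blocking the loop. A toy version of that dispatch-and-skip pattern; the names and channel wiring are illustrative, not kubelet's actual plumbing:

```go
// Minimal per-pod worker: each trigger attempts one sync; failures are
// logged and skipped, and a later trigger retries.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNetworkNotReady = errors.New("network is not ready")

func syncPod(name string) error { return errNetworkNotReady } // stand-in for the real sync

func podWorker(name string, trigger <-chan struct{}) {
	for range trigger {
		if err := syncPod(name); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, name)
			continue // skip; the next trigger retries
		}
	}
}

func main() {
	trigger := make(chan struct{})
	go podWorker("openshift-machine-config-operator/kube-rbac-proxy-crio-crc", trigger)
	trigger <- struct{}{}
	time.Sleep(10 * time.Millisecond) // let the worker log before exiting
}
```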
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.733505 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.733108691 +0000 UTC m=+153.423814955 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782093 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782243 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782203 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782399 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782594 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
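The "durationBeforeRetry 1m4s" in the unmount failure above matches a doubling backoff: 64s is 500ms doubled seven times, and kubelet's volume operations back off exponentially up to a cap of roughly two minutes. A sketch of that schedule; the initial value and cap are assumptions read off the log, not constants copied from the kubelet source:

```go
// Compute the wait before the next retry after `failures` consecutive
// failures of the same volume operation: start small, double, cap.
package main

import (
	"fmt"
	"time"
)

func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	// Eight straight failures: 500ms -> 1s -> ... -> 1m4s, matching the log.
	fmt.Println(backoff(500*time.Millisecond, 2*time.Minute+2*time.Second, 8))
}
```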
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.785295 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:19:21.055482643 +0000 UTC
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836767 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.836875 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.836988 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.836960402 +0000 UTC m=+153.527666756 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837203 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837219 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837229 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837384 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837331511 +0000 UTC m=+153.528037775 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837401 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837393803 +0000 UTC m=+153.528100067 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837591 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837646 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837663 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837734 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837709981 +0000 UTC m=+153.528416265 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
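The failing kube-api-access-* mounts are projected service-account volumes: a bound token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, which is why each failure lists exactly those two objects as "not registered" (the kubelet's object cache has not yet synced them). A sketch of that projection using the real core/v1 types; the volume and object names are taken from the log, and building it requires the k8s.io/api module:

```go
// Reconstruct the shape of the kube-api-access-s2dwl projected volume
// whose SetUp is failing above: token + two CA ConfigMaps.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // the conventional default for kube-api-access volumes
	vol := corev1.Volume{
		Name: "kube-api-access-s2dwl",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry, Path: "token"}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}}}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}}}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```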
Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.373571 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.376201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.376806 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.392810 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d7
85f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.409809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.426596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.447273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.460112 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.506622 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.525428 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.546565 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.558504 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.570651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.593346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.605546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.618338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.631755 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.642619 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.654421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.667404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.682034 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.690765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.699431 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.782743 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:38 crc kubenswrapper[4739]: E0121 15:27:38.782879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.786069 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:46:58.304343949 +0000 UTC Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.798699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.822596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.835602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.849198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.863097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.876885 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.894418 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898120 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898203 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.918346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.930682 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.946011 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.959872 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.972479 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.984767 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.004921 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.016623 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.032636 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.050900 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.064204 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.079135 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.101969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102042 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307646 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.381440 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.382084 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384090 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" exitCode=1
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"}
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384292 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384938 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.385187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.411507 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.425297 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.450430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.467153 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.480548 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.492957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.508559 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513763 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.527567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.539602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.552784 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.566791 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.580770 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.596457 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.613205 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616769 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.627910 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.637658 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.649731 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.660711 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.671290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719749 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782116 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782186 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782256 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782330 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782457 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.787180 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:11:12.73800003 +0000 UTC Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924255 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.026942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.028043 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.130692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.130977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131191 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234550 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.336950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337078 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.389661 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.395345 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:27:40 crc kubenswrapper[4739]: E0121 15:27:40.395667 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.430537 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440464 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.446837 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.462663 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.476685 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 
2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.491377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.504768 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.519448 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.528925 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.539063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.550465 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.561307 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.572829 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.584909 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.603593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.614874 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.630028 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644843 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.645918 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.657630 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.668152 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747101 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747171 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.782054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:40 crc kubenswrapper[4739]: E0121 15:27:40.782227 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.787389 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:40:05.52964058 +0000 UTC Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851452 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.056951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.056990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057023 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467592 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467616 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570913 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777509 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782442 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782681 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782802 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782938 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.788520 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:02:01.842409268 +0000 UTC Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.880923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881058 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983317 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188300 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.291870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.396004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.601020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.601059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702857 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.789423 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:53:18.565693742 +0000 UTC Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805574 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907955 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010956 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113376 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.215808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.215928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216226 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319131 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530450 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.591577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:43 crc kubenswrapper[4739]: E0121 15:27:43.591746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.635002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.635016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.740000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.740015 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.789874 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:24:54.597981791 +0000 UTC Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842310 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151890 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.195495 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.218397 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.234368 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237696 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.250643 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.278844 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.278950 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280974 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384536 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.486970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487428 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591379 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.593974 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.594241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.594270 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594637 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.790891 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:11:03.047723483 +0000 UTC Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782176 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782405 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782681 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.791411 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:36:03.044320467 +0000 UTC
Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.781944 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:46 crc kubenswrapper[4739]: E0121 15:27:46.782161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.791719 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:09:48.623705373 +0000 UTC
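The certificate_manager.go records above recompute a different rotation deadline on every pass (2025-11-24, 2025-11-18, 2026-01-13, ...). That is expected: client-go's certificate manager picks the deadline at a jittered point late in the certificate's lifetime and draws again on each retry when rotation has not yet succeeded. A sketch of that computation in Python; the notBefore date and the 70-90% window are assumptions chosen to be consistent with the deadlines logged here, not values read from the cluster:

import random
from datetime import datetime

# Assumed one-year serving certificate; the expiration matches the
# "Certificate expiration is 2026-02-24 05:53:03" records above, the
# issue date is an assumption.
not_before = datetime(2025, 2, 24, 5, 53, 3)
not_after = datetime(2026, 2, 24, 5, 53, 3)

# Deadline at a random point late in the lifetime (window assumed);
# each retry draws again, which is why every logged deadline differs.
lifetime = not_after - not_before
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)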
Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.782265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.782763 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.782999 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.783070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.783216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.783377 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784417 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.792639 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:06:22.558285442 +0000 UTC Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.887930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888077 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.782594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:48 crc kubenswrapper[4739]: E0121 15:27:48.782907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.793120 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:47:43.680851338 +0000 UTC Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.819770 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\
"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245
610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.838359 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.854988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.871557 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.883942 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.896423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.914095 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.925646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.934164 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.944633 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.957871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.974513 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.988277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.007893 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020920 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.034690 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.049292 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.064778 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.075666 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.122999 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329125 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431454 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637143 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740467 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782573 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782647 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.782837 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.782976 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.783056 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.793766 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:26:58.84938186 +0000 UTC Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253725 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.782241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:50 crc kubenswrapper[4739]: E0121 15:27:50.782410 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.793918 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:19:26.281483214 +0000 UTC Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871794 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974154 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179718 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.281596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.281768 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.281839 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:55.281799448 +0000 UTC m=+166.972505712 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384122 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
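The "durationBeforeRetry 1m4s" above is consistent with exponential backoff on the repeatedly failing mount: assuming an initial 500ms delay that doubles after each consecutive failure (an assumption; the constants are not printed in the log), the eighth failure waits 0.5s × 2^7 = 64s. A sketch:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the wait after the n-th consecutive failure, doubling from
// an initial delay up to a cap. Constants here are assumptions for illustration.
func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, backoff(500*time.Millisecond, 2*time.Minute+2*time.Second, n))
	}
	// failure 8 -> wait 1m4s, matching the durationBeforeRetry in the log entry.
}
```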
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486212 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589485 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782776 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.782922 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.783025 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.783130 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.794539 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:46:57.636115978 +0000 UTC Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795241 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795453 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899126 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.208042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.208260 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.310780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414282 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414319 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722869 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.783250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.783616 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:27:52 crc kubenswrapper[4739]: E0121 15:27:52.783671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:52 crc kubenswrapper[4739]: E0121 15:27:52.783862 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.794806 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:41:23.587831855 +0000 UTC Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826377 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031672 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133621 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752443 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782846 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782871 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783597 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783667 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783728 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.795869 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:51:14.550556481 +0000 UTC Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059386 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.263976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.481042 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485200 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.503022 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507324 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507333 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.520503 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.536871 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.555406 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.555544 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659161 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.761953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.761996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762032 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.782379 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.782493 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.796370 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:54:58.22305436 +0000 UTC Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869726 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.972635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.076988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077044 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178715 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282613 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386641 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.489694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.489971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490233 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.592771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593411 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782583 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782695 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782795 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797021 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:38:58.133869633 +0000 UTC Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797940 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901716 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004913 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.107327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.107884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211952 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417678 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520420 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623311 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.782254 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:56 crc kubenswrapper[4739]: E0121 15:27:56.782554 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.798466 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:35:40.211891464 +0000 UTC Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830854 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036534 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.139806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.346008 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756936 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782317 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782339 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782551 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782749 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.798640 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:22:28.582289034 +0000 UTC Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859860 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065423 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065451 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065476 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271975 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683873 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.782149 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:58 crc kubenswrapper[4739]: E0121 15:27:58.783092 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787531 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.799265 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:19:24.926051867 +0000 UTC Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.804333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\
"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245
610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.819726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.831226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.842944 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.856020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.867529 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.881596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891248 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891280 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.894303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.905744 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.919445 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.996662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.999944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.999980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:58.999991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.000010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.000021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.047638 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" podStartSLOduration=86.04761605 podStartE2EDuration="1m26.04761605s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.020934701 +0000 UTC m=+110.711640965" watchObservedRunningTime="2026-01-21 15:27:59.04761605 +0000 UTC m=+110.738322314" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.068738 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.068714444 podStartE2EDuration="1m27.068714444s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.067486943 +0000 UTC m=+110.758193207" watchObservedRunningTime="2026-01-21 15:27:59.068714444 +0000 UTC m=+110.759420708" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.068908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8zn2s" podStartSLOduration=87.068903129 podStartE2EDuration="1m27.068903129s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.048152913 +0000 UTC m=+110.738859187" watchObservedRunningTime="2026-01-21 15:27:59.068903129 +0000 UTC m=+110.759609393" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102798 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.173489 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mqkjd" podStartSLOduration=87.173461329 podStartE2EDuration="1m27.173461329s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.170574434 +0000 UTC m=+110.861280718" watchObservedRunningTime="2026-01-21 15:27:59.173461329 +0000 UTC m=+110.864167593" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.173810 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podStartSLOduration=87.173805237 podStartE2EDuration="1m27.173805237s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.151385498 +0000 UTC m=+110.842091762" watchObservedRunningTime="2026-01-21 15:27:59.173805237 +0000 UTC m=+110.864511501" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.308003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.308012 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.411000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.411016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.782857 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783082 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.783349 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783418 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.783549 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.799996 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:40:39.085836941 +0000 UTC
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.029995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.030078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.781905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:00 crc kubenswrapper[4739]: E0121 15:28:00.782058 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.801075 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:53:11.147228349 +0000 UTC
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058140 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.782465 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782598 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.783018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.782808 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.801679 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:39:35.856293484 +0000 UTC
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.782229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:02 crc kubenswrapper[4739]: E0121 15:28:02.782363 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.802661 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:39:12.975491288 +0000 UTC
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.010691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.010976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781935 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781958 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782350 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782417 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.804233 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:30:17.65858972 +0000 UTC
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040513 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.620687 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"]
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.621309 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623459 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623496 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.625526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.627714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.629076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.650685 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.650670136 podStartE2EDuration="1m30.650670136s" podCreationTimestamp="2026-01-21 15:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.649104356 +0000 UTC 
m=+116.339810630" watchObservedRunningTime="2026-01-21 15:28:04.650670136 +0000 UTC m=+116.341376400" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.665462 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.665422707 podStartE2EDuration="1m28.665422707s" podCreationTimestamp="2026-01-21 15:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.664859413 +0000 UTC m=+116.355565717" watchObservedRunningTime="2026-01-21 15:28:04.665422707 +0000 UTC m=+116.356128971" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.678319 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.678281879 podStartE2EDuration="1m3.678281879s" podCreationTimestamp="2026-01-21 15:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.6779536 +0000 UTC m=+116.368659884" watchObservedRunningTime="2026-01-21 15:28:04.678281879 +0000 UTC m=+116.368988153" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731213 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731952 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.740412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.741564 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" podStartSLOduration=92.741549542 podStartE2EDuration="1m32.741549542s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.739838848 +0000 UTC m=+116.430545162" watchObservedRunningTime="2026-01-21 15:28:04.741549542 +0000 UTC m=+116.432255806" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.758142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.783377 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:04 crc kubenswrapper[4739]: E0121 15:28:04.783556 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.784503 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.78448777 podStartE2EDuration="28.78448777s" podCreationTimestamp="2026-01-21 15:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.784358097 +0000 UTC m=+116.475064371" watchObservedRunningTime="2026-01-21 15:28:04.78448777 +0000 UTC m=+116.475194034" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.804448 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:15:07.404863379 +0000 UTC Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.804534 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.806241 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ppn47" podStartSLOduration=92.806226572 podStartE2EDuration="1m32.806226572s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.805624466 +0000 UTC m=+116.496330730" watchObservedRunningTime="2026-01-21 15:28:04.806226572 +0000 UTC m=+116.496932836" Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.811515 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.935274 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.658966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" event={"ID":"b2bbaa74-fc02-4130-aec7-49b9922e6af7","Type":"ContainerStarted","Data":"bdf2138e60c23fb8635fde97123b83fd9eb18a358fc95a47758129e6da4e67d7"} Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.659310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" event={"ID":"b2bbaa74-fc02-4130-aec7-49b9922e6af7","Type":"ContainerStarted","Data":"ac0fff1441797c2666736686c670fa61092b686fbb3643e4bf78b03e6cedf8a7"} Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781868 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781892 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782027 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782131 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782429 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.782742 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:28:06 crc kubenswrapper[4739]: I0121 15:28:06.782799 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:06 crc kubenswrapper[4739]: E0121 15:28:06.782983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.781983 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.782058 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782126 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782197 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.782284 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782344 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:28:08 crc kubenswrapper[4739]: E0121 15:28:08.762720 4739 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 15:28:08 crc kubenswrapper[4739]: I0121 15:28:08.782143 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:08 crc kubenswrapper[4739]: E0121 15:28:08.784145 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.105870 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.671505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672113 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672168 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935" exitCode=1
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"}
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672227 4739 scope.go:117] "RemoveContainer" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672539 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.672666 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.693102 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" podStartSLOduration=97.693086249 podStartE2EDuration="1m37.693086249s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:05.674591517 +0000 UTC m=+117.365297801" watchObservedRunningTime="2026-01-21 15:28:09.693086249 +0000 UTC m=+121.383792513"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782244 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782430 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782497 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782257 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782571 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:10 crc kubenswrapper[4739]: I0121 15:28:10.677030 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:10 crc kubenswrapper[4739]: I0121 15:28:10.782173 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:10 crc kubenswrapper[4739]: E0121 15:28:10.782318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782702 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782861 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.782854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.782978 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.783021 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:12 crc kubenswrapper[4739]: I0121 15:28:12.781895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:12 crc kubenswrapper[4739]: E0121 15:28:12.782138 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.781961 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.781979 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783105 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783180 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.782040 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:14 crc kubenswrapper[4739]: E0121 15:28:14.107405 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:14 crc kubenswrapper[4739]: I0121 15:28:14.782346 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:14 crc kubenswrapper[4739]: E0121 15:28:14.782465 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782218 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783065 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783102 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:16 crc kubenswrapper[4739]: I0121 15:28:16.782760 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:16 crc kubenswrapper[4739]: E0121 15:28:16.782928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782115 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782264 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:18 crc kubenswrapper[4739]: I0121 15:28:18.782727 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:18 crc kubenswrapper[4739]: E0121 15:28:18.784650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:18 crc kubenswrapper[4739]: I0121 15:28:18.785773 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:28:18 crc kubenswrapper[4739]: E0121 15:28:18.786075 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.108211 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782560 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.782624 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.783006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.783134 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:20 crc kubenswrapper[4739]: I0121 15:28:20.782214 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:20 crc kubenswrapper[4739]: E0121 15:28:20.782449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.781975 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.782156 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.782054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:22 crc kubenswrapper[4739]: I0121 15:28:22.782294 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:22 crc kubenswrapper[4739]: E0121 15:28:22.782561 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782439 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.782580 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782445 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.782696 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.783014 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:24 crc kubenswrapper[4739]: E0121 15:28:24.110120 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:24 crc kubenswrapper[4739]: I0121 15:28:24.782217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:24 crc kubenswrapper[4739]: E0121 15:28:24.782384 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:24 crc kubenswrapper[4739]: I0121 15:28:24.782805 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.722861 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.722911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520"}
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782515 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.782879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782666 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782571 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.783764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.784227 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:26 crc kubenswrapper[4739]: I0121 15:28:26.782070 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:26 crc kubenswrapper[4739]: E0121 15:28:26.782406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782028 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782072 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782043 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782231 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782346 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782441 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:28 crc kubenswrapper[4739]: I0121 15:28:28.782337 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:28 crc kubenswrapper[4739]: E0121 15:28:28.783513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.110594 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.781872 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.781950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.782018 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782367 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782425 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782472 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.782744 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.557499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"]
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.742192 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745062 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745066 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752"}
Jan 21 15:28:30 crc kubenswrapper[4739]: E0121 15:28:30.745159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.782474 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:30 crc kubenswrapper[4739]: E0121 15:28:30.782620 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:31 crc kubenswrapper[4739]: I0121 15:28:31.782143 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:31 crc kubenswrapper[4739]: I0121 15:28:31.782181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:31 crc kubenswrapper[4739]: E0121 15:28:31.782657 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:31 crc kubenswrapper[4739]: E0121 15:28:31.782883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:32 crc kubenswrapper[4739]: I0121 15:28:32.782290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:32 crc kubenswrapper[4739]: I0121 15:28:32.782338 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:32 crc kubenswrapper[4739]: E0121 15:28:32.782411 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:32 crc kubenswrapper[4739]: E0121 15:28:32.782504 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:28:33 crc kubenswrapper[4739]: I0121 15:28:33.782367 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:33 crc kubenswrapper[4739]: I0121 15:28:33.782447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:33 crc kubenswrapper[4739]: E0121 15:28:33.782517 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:28:33 crc kubenswrapper[4739]: E0121 15:28:33.782594 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.782328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.782418 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.223288 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.223356 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.331045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.361753 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podStartSLOduration=123.36173263 podStartE2EDuration="2m3.36173263s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:30.776432817 +0000 UTC m=+142.467139081" watchObservedRunningTime="2026-01-21 15:28:35.36173263 +0000 UTC m=+147.052438894" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.362740 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.363301 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.368423 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369068 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369140 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369209 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385031 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385381 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385771 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385941 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385984 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.386665 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.388292 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.391553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.391921 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392196 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392436 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392694 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.393040 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.393381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.399571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.399962 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.402440 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.408166 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.408761 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409652 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410110 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410289 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410673 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410866 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.411963 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.413175 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.416964 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.417733 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.424078 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.424635 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.444348 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.444866 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448108 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448400 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456275 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456488 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456620 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.457330 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.457874 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.458416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.458927 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.459298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.459725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.468984 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.472673 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480796 4739 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481114 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481761 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.482077 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.483763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.484690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.485018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.485916 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486331 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486457 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486559 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.487067 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.488117 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.488530 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.489432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.489790 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.491110 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.491405 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.496594 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497251 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc 
kubenswrapper[4739]: I0121 15:28:35.497540 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497887 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498411 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498448 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.500968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.501438 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.501984 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.502657 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504253 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504527 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504695 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504789 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504996 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505366 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507770 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507797 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507878 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507948 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod 
\"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508203 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508220 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508292 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: 
\"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h5g\" (UniqueName: \"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.509707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hm72p"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.510746 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.514287 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.514708 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517017 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517224 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517588 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517735 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517928 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518066 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518269 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518461 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.519024 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.519263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.521394 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.521836 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522138 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522373 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522688 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.523741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524054 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524069 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524259 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524301 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524458 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524530 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524563 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524682 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524706 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525145 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525316 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525477 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525625 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525783 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.527096 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 
15:28:35.528000 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528451 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528844 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529093 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529439 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.532965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.533478 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.535231 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.535443 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.537102 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.539634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.553509 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.558169 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.561100 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.563543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.587555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.591209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.598290 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.598968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.599430 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.600226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.600628 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.603995 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.601594 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.602907 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.604679 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620928 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621396 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621420 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620850 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621448 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621769 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621855 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621931 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h5g\" (UniqueName: \"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621990 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621999 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622052 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622143 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622398 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622462 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622682 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622726 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622952 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622996 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623106 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623144 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623185 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623475 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623670 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611468 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.625611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.626373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.627237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.628243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.631729 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.632182 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.632634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.633076 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.633264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.634210 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.635038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.636170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.638286 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.638678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.639976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640685 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640990 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641445 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641738 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642017 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642776 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642976 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.643363 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.655211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.655917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.659521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.661865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.662873 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.664156 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.665137 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.668799 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.672121 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.672958 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.678977 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.680162 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.680891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.682884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.682955 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.683008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.689721 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.693255 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.693844 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.696532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.696590 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.697788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.698931 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.699379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.699932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.701388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.702421 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.705071 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xg9nx"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.705373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.706378 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.707753 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.709153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.710808 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.712093 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.713880 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.714621 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-796x7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.715338 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.718104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.718755 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.719271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.720921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.723043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724803 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724858 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725114 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21
15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725442 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725508 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725639 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: 
\"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725928 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725975 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" 
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.728872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.733003 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: 
I0121 15:28:35.734318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.734536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.735772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.736613 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737219 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737534 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" 
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738569 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jcttp"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739689 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739841 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740105 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740121 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742002 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742083 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.744466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.745142 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.746008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.746589 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.747000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.747553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.748068 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.748708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.749880 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xg9nx"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.750295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.750973 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.751881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.751969 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.753031 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.754030 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.755866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.757626 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.759628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.761156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.762621 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.766182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.781883 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.781923 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.786161 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.805364 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.824910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.845662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.865762 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.885555 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.913075 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.925269 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.946339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.970070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.985798 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.999600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.005947 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.012108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.026095 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.045214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.066125 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.073600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.086365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.105803 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.125344 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.145404 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.165793 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.206431 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.225937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.246396 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.266237 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.286273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.305698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.325195 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.346210 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.365512 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.385360 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.406370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.426716 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.445306 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.466256 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.485083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.505714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.525047 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.546183 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.573441 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.585683 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.605981 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.624033 4739 request.go:700] Waited for 1.001275804s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.626717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.646026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.666612 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.704468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h5g\" (UniqueName: 
\"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.705732 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.725847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.746037 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.766269 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.788274 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.806088 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.826054 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.846061 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.866626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.885440 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.905734 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.914444 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.931122 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.964993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.983455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.998333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.024184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.025284 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.046095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.080779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.099081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.106162 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.125587 4739 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.145180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.165620 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.181618 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.185921 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.201233 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.205793 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.206724 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.225937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.238957 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.245390 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.260532 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.265661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.273182 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.286326 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.305702 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.325352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.345218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.365277 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.386316 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.405739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.425532 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.460027 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.510920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.528093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.541782 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.545380 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.560839 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.564948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.581933 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.600018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.604943 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.622391 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.624136 4739 request.go:700] Waited for 1.887673653s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.641835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.668678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.680302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.685660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.698362 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.705417 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.719897 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.726484 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.737457 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.745806 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.765847 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.768240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"7ac5cc0555e0b07e6a31978976b1c8cc2c03762a186e8b52258613fbc2b0adad"} Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.785587 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.805705 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.812173 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.825788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.120192 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.120935 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.121325 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.121524 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129713 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129899 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130137 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.130636 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.630619824 +0000 UTC m=+150.321326088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232052 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.232243 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.732216491 +0000 UTC m=+150.422922765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232597 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232614 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232630 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232711 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 
15:28:38.232791 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232834 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232923 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232990 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.233634 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.733626359 +0000 UTC m=+150.424332623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.233998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234078 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234161 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 
21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.236245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.236590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237603 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237654 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238596 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240232 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65nj\" (UniqueName: 
\"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240664 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240684 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240909 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") 
pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.244226 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.245415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.253087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.313871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.314235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.344831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc 
kubenswrapper[4739]: I0121 15:28:38.345181 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345319 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: 
\"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345444 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65nj\" (UniqueName: \"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345465 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345549 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345583 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345762 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: 
\"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345789 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345960 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345982 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.346011 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.346073 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.846054657 +0000 UTC m=+150.536760931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.348362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.350484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod 
\"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.352649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.353111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.359607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.360553 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.361117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.363623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.363760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.365106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.365154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367842 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368144 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369381 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: 
\"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368807 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371270 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.372381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.372895 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.872878498 +0000 UTC m=+150.563584852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.373085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.373337 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.374095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.374908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " 
pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.375251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.377672 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368176 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.379045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.382244 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.382577 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.383248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.389734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.392597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.394159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.395687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.396846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc 
kubenswrapper[4739]: I0121 15:28:38.398307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.399865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.400726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.402012 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.403400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.403947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.405966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.448324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.459739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.472910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.473106 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.973002426 +0000 UTC m=+150.663708690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.473295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.473778 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.973767187 +0000 UTC m=+150.664473501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.480754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65nj\" (UniqueName: \"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.500096 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.511433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"
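The nestedpendingoperations errors above are kubelet's volume manager gating retries: the hostpath CSI driver (kubevirt.io.hostpath-provisioner) has not yet registered with this kubelet (its csi-hostpathplugin-p994f pod is only now being mounted and started), so every TearDown/MountDevice attempt fails fast and is pushed behind a retry window. A minimal Go sketch of that gating, assuming the 500ms initial delay visible in the log plus doubling and a cap (illustrative values, not kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// exponentialBackoff gates retries of a named operation the way the
// "No retries permitted until ... (durationBeforeRetry 500ms)" records
// suggest: after a failure the operation may not run again until
// lastError + delay, and the delay grows on consecutive failures.
// The 500ms / 2x / 2m parameters are assumptions for illustration.
type exponentialBackoff struct {
	lastError time.Time
	delay     time.Duration
}

const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute
	factor       = 2
)

// allowed reports whether a retry is permitted at time now.
func (b *exponentialBackoff) allowed(now time.Time) bool {
	return b.delay == 0 || now.After(b.lastError.Add(b.delay))
}

// recordFailure notes a failed attempt and widens the retry window.
func (b *exponentialBackoff) recordFailure(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else if b.delay < maxDelay {
		b.delay *= factor
	}
	b.lastError = now
}

func main() {
	var b exponentialBackoff
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.recordFailure(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, now.Add(b.delay).Format(time.RFC3339Nano), b.delay)
		now = now.Add(b.delay) // assume the next attempt happens as soon as the window opens
	}
}

Once the plugin registers over the kubelet plugin-registration socket, the next permitted retry of these same operations would be expected to succeed.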
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.537860 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.546507 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.565809 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.574346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.574497 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.074475861 +0000 UTC m=+150.765182125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.574714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.575717 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.075706673 +0000 UTC m=+150.766412937 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.579942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.588206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.601532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.609083 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.629888 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.635841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.648211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.661178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.670627 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.687739 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.688218 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.188200835 +0000 UTC m=+150.878907099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.717608 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.720531 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.759301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.769607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"b896fd37c22a8b07cf395936f362322d6982236110e3d3bfe51ad5cc5e831099"} Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777479 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"a3abeec588a50be7d868efbedbc00a6b5b03b73e0d9a165da7757fcd0830f8bd"} Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.778635 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.789571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.790327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.790511 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.801946 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.301887927 +0000 UTC m=+150.992594201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.823375 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.839140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.840742 4739 util.go:30] "No sandbox for pod can be found. 
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.850364 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.853117 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.858099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.865575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.875741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.877322 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.901630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.901555 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.904907 4739 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.905108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.905655 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.405626882 +0000 UTC m=+151.096333156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.909730 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.930738 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"] Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.934601 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"] Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.941156 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:38 crc kubenswrapper[4739]: W0121 15:28:38.990351 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c04a1d_2207_466b_8732_7e90b2abd45a.slice/crio-71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b WatchSource:0}: Error finding container 71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b: Status 404 returned error can't find the container with id 71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.000034 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.007033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.007392 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.507378224 +0000 UTC m=+151.198084498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.007507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.020995 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.026878 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.044941 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.054939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.061084 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.101544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.104728 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"
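Each "No sandbox for pod can be found. Need to start a new one" record means kubelet found no live pod sandbox in the runtime and is about to ask CRI-O for a fresh one over the CRI gRPC API. A hedged sketch of that call using the published k8s.io/cri-api client; the socket path and the minimal config are assumptions, and the real kubelet also wires up DNS, port mappings, security context and cgroups:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket on an OpenShift node; an assumption here,
	// and reaching it requires root on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Minimal sandbox config, using names from the record above; kubelet
	// fills in far more (hostname, DNS, ports, linux security options).
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "machine-config-operator-74547568cd-86gpr",
				Namespace: "openshift-machine-config-operator",
				Uid:       "635cd233-be60-44f6-b899-1d283e383a5f",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("new sandbox:", resp.PodSandboxId)
}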
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.107795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.108164 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.60813856 +0000 UTC m=+151.298844824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.116445 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.141861 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.152098 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.161260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.172720 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.217832 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.219537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.219979 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.719965792 +0000 UTC m=+151.410672056 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.301959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.321558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.321980 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.82196062 +0000 UTC m=+151.512666884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.322022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.322345 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.822336531 +0000 UTC m=+151.513042805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.328170 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.332869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.336307 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.414269 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.427042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.427549 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.927525535 +0000 UTC m=+151.618231799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.437660 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.451097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.465831 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"] Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.504618 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a5775c_2a4c_43f6_869c_9fb214de2806.slice/crio-8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b WatchSource:0}: Error finding container 8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b: Status 404 returned error can't find the container with id 8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.537082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.537891 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.037875598 +0000 UTC m=+151.728581872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.555739 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"] Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.593395 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59bd4039_f143_418b_94d6_8fa9d3db77f5.slice/crio-8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3 WatchSource:0}: Error finding container 8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3: Status 404 returned error can't find the container with id 8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3 Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.594520 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b5b7f5_050a_4013_9d21_fdfae7128b21.slice/crio-7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28 WatchSource:0}: Error finding container 7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28: Status 404 returned error can't find the container with id 7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28 Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.604444 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf99aadf5_6fdc_42b5_937c_4792f24882ce.slice/crio-2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a WatchSource:0}: Error finding container 2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a: Status 404 returned error can't find the container with id 2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.612007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.642864 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.643488 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.143471983 +0000 UTC m=+151.834178247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.658688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xg9nx"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.665423 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.668531 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.669921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"] Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.669860 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode389a6f6_d97e_4ec0_a35f_a8c0e7d19669.slice/crio-3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9 WatchSource:0}: Error finding container 3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9: Status 404 returned error can't find the container with id 3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9 Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.756189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.756547 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.256536839 +0000 UTC m=+151.947243103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.761884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"] Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.814194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" event={"ID":"7b7d9bcd-b091-4811-9196-cc6c20bab78c","Type":"ContainerStarted","Data":"3b8d819a8b8d79555feca5e9132f2ac6dfa1620711711f9ccd7d3ede2c4eeb1b"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.845547 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" podStartSLOduration=127.845532468 podStartE2EDuration="2m7.845532468s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:39.810063726 +0000 UTC m=+151.500769990" watchObservedRunningTime="2026-01-21 15:28:39.845532468 +0000 UTC m=+151.536238732" Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.847210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hm72p" event={"ID":"c3085f19-d556-4022-a16d-13c66c1d57d1","Type":"ContainerStarted","Data":"21745f8c7a031cbd91d0eeb6f093c61a1fa24b6ad379c091c4eceea8d137109f"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.859880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.860414 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.360397967 +0000 UTC m=+152.051104231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.863368 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"01c2bc965f742c15303300d45b0194248b00aaa0b99f54fdb6551133db57141b"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.864183 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"3aadf90c5474910a679291b80523847429377b4f5a81aa26f6bad34d6314b964"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.865484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.866141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerStarted","Data":"3a8882cf407b430ab843c7b0296458050aa0914b1f0016eaa92def189446dcfe"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.866731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerStarted","Data":"a312274d61cdfef373903e83e3a79f8e6217d316bd6726cff1386794baa06eb2"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.867356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"219a7242bdd29a9f2d06a6cd8ac8a3b8fd5ee6c737170ed50fc116eb0c67735c"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.867997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" event={"ID":"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664","Type":"ContainerStarted","Data":"fb62da7ae3b55a944b1ae15d6bea54057e42ba711a4565f6eebcd7d4e574a7c3"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.927183 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" event={"ID":"04cf092e-a0db-45c5-a311-f28c1a4a8e1d","Type":"ContainerStarted","Data":"0686fc834e8d1e77bcc746404edb3c9639a8d8c2af73d7bf81fff228bce620d3"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.927414 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" event={"ID":"04cf092e-a0db-45c5-a311-f28c1a4a8e1d","Type":"ContainerStarted","Data":"4ffd6d1e17fa3838b7921c3c13a18dfef225650294f8dde06fdc015bd076168b"} Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.928219 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.929422 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-gw4z7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.929456 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" podUID="04cf092e-a0db-45c5-a311-f28c1a4a8e1d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.931165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"cc8458876e98dbd5b7131c8eb6810205142c9808ae3bc754702a97a0074acfdd"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.956980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" event={"ID":"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669","Type":"ContainerStarted","Data":"3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.962007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.962397 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.462386146 +0000 UTC m=+152.153092410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
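The console-operator readiness failure above is the usual startup race: the container is running but nothing is listening on 10.217.0.27:8443 yet, so the prober's GET to /readyz gets connection refused and the probe is recorded as failed until a later attempt succeeds. A small Go sketch of an equivalent HTTP readiness check (the URL comes from the log; skipping certificate verification for a self-signed serving cert is an assumption about probe behavior):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadyz performs one readiness check the way the log's prober does:
// any connection error or non-2xx status counts as a failure.
func probeReadyz(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Probes typically tolerate self-signed serving certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.27:8443: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeReadyz("https://10.217.0.27:8443/readyz"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("ready")
	}
}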
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.979321 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.997176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerStarted","Data":"0797ec5703e54e95d565c3f72eae2eb927cff79ac4d8eb9ae951b8b30e7e3b11"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.999342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.005070 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.007553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.008801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" event={"ID":"77b5b7f5-050a-4013-9d21-fdfae7128b21","Type":"ContainerStarted","Data":"7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.032539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" event={"ID":"03c04a1d-2207-466b-8732-7e90b2abd45a","Type":"ContainerStarted","Data":"71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.043801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" event={"ID":"f99aadf5-6fdc-42b5-937c-4792f24882ce","Type":"ContainerStarted","Data":"2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.044997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xfwnt" event={"ID":"be284180-78a3-4a18-86b3-37d08ab06390","Type":"ContainerStarted","Data":"5e40aeb0ab1b3858b55fe1256f14dc66926da01cabb8f2f41268eac80f1188be"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.056570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"638b6a7b56920a8c6a06d1287706b1b277e1db8a34130228ef39ec793b32f51a"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.056792 4739 csr.go:261] certificate signing request csr-dspkw is approved, waiting to be issued
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.061494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcttp" event={"ID":"41a5775c-2a4c-43f6-869c-9fb214de2806","Type":"ContainerStarted","Data":"8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b"}
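The csr.go records trace the node's client-certificate rotation: csr-dspkw is approved first, and it only counts as issued once the signer populates the CSR's status.certificate (the "issued" record follows just below). A sketch of how one might inspect that state with client-go; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; inside a pod one would use in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	csr, err := client.CertificatesV1().CertificateSigningRequests().
		Get(context.Background(), "csr-dspkw", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	approved := false
	for _, cond := range csr.Status.Conditions {
		if cond.Type == certificatesv1.CertificateApproved {
			approved = true
		}
	}
	// "approved, waiting to be issued" vs "issued" in the log maps to
	// whether status.certificate has been filled in by the signer yet.
	fmt.Printf("approved=%v issued=%v\n", approved, len(csr.Status.Certificate) > 0)
}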
pod="openshift-machine-config-operator/machine-config-server-jcttp" event={"ID":"41a5775c-2a4c-43f6-869c-9fb214de2806","Type":"ContainerStarted","Data":"8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b"} Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.062517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.063578 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.563559622 +0000 UTC m=+152.254265886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.067772 4739 csr.go:257] certificate signing request csr-dspkw is issued Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.068381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" event={"ID":"348f800b-2552-4315-9b58-a679d8d8b6f3","Type":"ContainerStarted","Data":"414c589f52cdc090d66ba0bfaca5073d0cc2f057c4f374ec043ab30ad5e7dc94"} Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.070476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerStarted","Data":"e4675eee738b63b97090f22c95b85529c72e94712c541ee32f2733019ac82430"} Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.078153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"9bb7cccab08898decd5b54fff23801897274d0344dc3e51ffe1c264160053439"} Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.092836 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf3570d_9cd6_4e26_bb55_023b935f9615.slice/crio-034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3 WatchSource:0}: Error finding container 034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3: Status 404 returned error can't find the container with id 034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3 Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.104676 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e32932_afd4_4e36_8b07_1c6741c86bbd.slice/crio-91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3 WatchSource:0}: Error finding container 91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3: Status 404 returned 
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.112396 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"]
Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.154262 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa3cda86_5932_40aa_9c01_3f95853884f9.slice/crio-6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7 WatchSource:0}: Error finding container 6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7: Status 404 returned error can't find the container with id 6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.164423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.166320 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.66630563 +0000 UTC m=+152.357011884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.258056 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.266362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.267625 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.767608471 +0000 UTC m=+152.458314735 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.274591 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c2a5bd_ed78_4e28_b942_2aa30b4bb63f.slice/crio-e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712 WatchSource:0}: Error finding container e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712: Status 404 returned error can't find the container with id e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712 Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.352634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.363145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.375887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.376319 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.876303609 +0000 UTC m=+152.567009883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.384665 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.479600 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.479923 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" podStartSLOduration=128.479906421 podStartE2EDuration="2m8.479906421s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:40.47912029 +0000 UTC m=+152.169826564" watchObservedRunningTime="2026-01-21 15:28:40.479906421 +0000 UTC m=+152.170612685" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.480157 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.980142138 +0000 UTC m=+152.670848402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.508723 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb2e8f4d_c66b_4476_90fe_925010e7e22e.slice/crio-77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c WatchSource:0}: Error finding container 77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c: Status 404 returned error can't find the container with id 77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.558095 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.583250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.584055 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.084039197 +0000 UTC m=+152.774745461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.585319 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.605512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.628461 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.678622 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52aa9f8a_6b89_442e_b9a2_5943d96d42fc.slice/crio-28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d WatchSource:0}: Error finding container 28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d: Status 404 returned error can't find the container with id 28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.684280 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.684701 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.184684489 +0000 UTC m=+152.875390753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.788230 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.788752 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:28:41.288739383 +0000 UTC m=+152.979445647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.889272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.889476 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.389450677 +0000 UTC m=+153.080156941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.889663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.890229 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.390220058 +0000 UTC m=+153.080926322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.992193 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.992391 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.49236746 +0000 UTC m=+153.183073714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.992512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.992876 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.492861484 +0000 UTC m=+153.183567768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.072434 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 15:23:40 +0000 UTC, rotation deadline is 2026-10-11 07:03:25.970003954 +0000 UTC Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.072473 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6303h34m44.897533153s for next certificate rotation Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.105798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.106187 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.606169616 +0000 UTC m=+153.296875880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.120308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcttp" event={"ID":"41a5775c-2a4c-43f6-869c-9fb214de2806","Type":"ContainerStarted","Data":"8795ace6cd95aa25e1438b7d0a1c204d25e02eecd8da891f019bf9b132071e4c"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.121830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" event={"ID":"c678179e-9aa8-4246-88c7-d0b23452615e","Type":"ContainerStarted","Data":"6f3c911fd326a71e42a1d6bd2bacdd7037c4a309ee09b3784ceb59643d5cd92f"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.122870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" event={"ID":"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f","Type":"ContainerStarted","Data":"e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.123633 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3"} Jan 21 15:28:41 crc 
kubenswrapper[4739]: I0121 15:28:41.124436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"1340735dc90dd89f835d06fae9a3f3c7713a0bc83b5137a395d2d3b5551a99ad"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.125322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerStarted","Data":"39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.128586 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerStarted","Data":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.133310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" event={"ID":"52aa9f8a-6b89-442e-b9a2-5943d96d42fc","Type":"ContainerStarted","Data":"28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.135540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" event={"ID":"e1f7a893-ca61-4fee-ad9d-d5c779092226","Type":"ContainerStarted","Data":"5fee120e30210bc900e1c192d0f436729e94475c2b16e6d6bf3d490e4f53bf47"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.137643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" event={"ID":"eb2e8f4d-c66b-4476-90fe-925010e7e22e","Type":"ContainerStarted","Data":"77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.140307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"b80b3b000d3019f617a5e66df91e774abcb285355201e19045d42df8b4ea32c9"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.141423 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" event={"ID":"7b7d9bcd-b091-4811-9196-cc6c20bab78c","Type":"ContainerStarted","Data":"3e23bc11de57f95bb84435dcf762f93674cd34e94f04992551ab5e6ea922199d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.142316 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.144037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" event={"ID":"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82","Type":"ContainerStarted","Data":"669b0a8174da4dd5e4d3039ec248664951fc3f557382aac100b894eaf461f24d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.153088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" 
event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerStarted","Data":"034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.155687 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b6f6r" podStartSLOduration=129.155668035 podStartE2EDuration="2m9.155668035s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.146135109 +0000 UTC m=+152.836841403" watchObservedRunningTime="2026-01-21 15:28:41.155668035 +0000 UTC m=+152.846374299" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.160425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"d0cf6c72b2d0a5604e83e07d4ba08bd12eb5a76c4c262644b3fe01f62929c752"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.162200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"a777a86d38b7faaa99cbc4ee31534bacb87ccdf6f63317683ce67c7ecd01a8f9"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.164627 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" podStartSLOduration=128.164613185 podStartE2EDuration="2m8.164613185s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.163704061 +0000 UTC m=+152.854410325" watchObservedRunningTime="2026-01-21 15:28:41.164613185 +0000 UTC m=+152.855319449" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.167858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerStarted","Data":"e7f90a4a156c4791d43e50f63871bf0db885480b9b2d6f3074942567e4b12032"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.177387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" event={"ID":"03c04a1d-2207-466b-8732-7e90b2abd45a","Type":"ContainerStarted","Data":"4909ed11916a1a1fb0012f93189a8864b7baa2a98fd62273df47db244631e8e6"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.178881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" event={"ID":"114b5947-30d6-4a6b-a1c6-1b1f75888037","Type":"ContainerStarted","Data":"27d762c49471e999fcc4a74ca88e65b71174f9da7d91ee7e7c3891a775b43ae4"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.182998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" event={"ID":"f99aadf5-6fdc-42b5-937c-4792f24882ce","Type":"ContainerStarted","Data":"ad7d08d826a0b8397ba463bbf060e3b24b641853508bac41d962bf1915c6f055"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.183283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.184207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"a0dd79fbd0830552fc13997f036e965edd5d39797c653aa430440c7fb7a1a584"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.187732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.188732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.196923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hm72p" event={"ID":"c3085f19-d556-4022-a16d-13c66c1d57d1","Type":"ContainerStarted","Data":"d8e8ac3fddc474e11cade21d2ac71e72aba197893adb1d8f39962d68b165ac77"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.198910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-796x7" event={"ID":"82e0a5a3-17e1-4a27-a30a-998b20238558","Type":"ContainerStarted","Data":"4480f40c67713eb4bf63a882d0045ba42d5abd869e662f94dac128bc7b9c99dd"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.209688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.211462 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" podStartSLOduration=129.211447083 podStartE2EDuration="2m9.211447083s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.193193833 +0000 UTC m=+152.883900097" watchObservedRunningTime="2026-01-21 15:28:41.211447083 +0000 UTC m=+152.902153347" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.213792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"988c293d05487e414e3a7834d56e5a23899f4ae72cabf77d465f471a42eb3820"} Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.213940 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.713922569 +0000 UTC m=+153.404628833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.222634 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerStarted","Data":"48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.223155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.224727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" event={"ID":"aa3cda86-5932-40aa-9c01-3f95853884f9","Type":"ContainerStarted","Data":"6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.236869 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" podStartSLOduration=128.232800626 podStartE2EDuration="2m8.232800626s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.212421949 +0000 UTC m=+152.903128223" watchObservedRunningTime="2026-01-21 15:28:41.232800626 +0000 UTC m=+152.923506890" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.237845 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hm72p" podStartSLOduration=129.237806981 podStartE2EDuration="2m9.237806981s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.232087697 +0000 UTC m=+152.922793961" watchObservedRunningTime="2026-01-21 15:28:41.237806981 +0000 UTC m=+152.928513245" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.238627 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hbpqz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.238676 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.241570 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.259697 4739 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podStartSLOduration=129.259681147 podStartE2EDuration="2m9.259681147s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.252535005 +0000 UTC m=+152.943241269" watchObservedRunningTime="2026-01-21 15:28:41.259681147 +0000 UTC m=+152.950387411" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.310859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.311048 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.811019546 +0000 UTC m=+153.501725810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.311482 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.312729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.812713971 +0000 UTC m=+153.503420235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.413015 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.413382 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.913363184 +0000 UTC m=+153.604069448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.477751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.516669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.517060 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.017044678 +0000 UTC m=+153.707750942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.617933 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.618232 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.118209324 +0000 UTC m=+153.808915588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.618763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.619181 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.1191731 +0000 UTC m=+153.809879364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.720109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.720515 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.220495021 +0000 UTC m=+153.911201295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.779365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.786724 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:41 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:41 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:41 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.786791 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.821251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.821913 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.321898893 +0000 UTC m=+154.012605157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922497 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.925603 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.425576067 +0000 UTC m=+154.116282341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.926100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.932129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.935768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.936345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.977179 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.977940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.005721 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.024584 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.024904 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:28:42.524891144 +0000 UTC m=+154.215597408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.126340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.126667 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.626651886 +0000 UTC m=+154.317358150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.227848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.228395 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.728380257 +0000 UTC m=+154.419086521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.266874 4739 generic.go:334] "Generic (PLEG): container finished" podID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerID="ff3939dbd1b5a229bc2b4f6a3a3eea9cf8b4d697da690b57b7e36b70462633be" exitCode=0 Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.266954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerDied","Data":"ff3939dbd1b5a229bc2b4f6a3a3eea9cf8b4d697da690b57b7e36b70462633be"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.273876 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"7438c4bc6be357a40c115ae6d0bb1e2bb400b651acbbf189cfa238f370e6c821"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.274729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"a5ec400f39caf5b0167671bda3eb22f25c853e2a2631d6ae9d9972be77e2c805"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.275502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"125b51ad1eaf304b6c9aa5114cd7dca241eeed7690fce1ac15efc358494f4ac5"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.276275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerStarted","Data":"6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.277009 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.281898 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.281946 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.283411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" event={"ID":"77b5b7f5-050a-4013-9d21-fdfae7128b21","Type":"ContainerStarted","Data":"eb6fea3f6e445b19ac1c7408cdb05319e93ceb03f6022f140968c61fd8ec1337"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.287314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" event={"ID":"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664","Type":"ContainerStarted","Data":"71df87496234a55dc5b65f2f1575773f36992c8d9cd301f003289328473d82b9"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.299172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" event={"ID":"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669","Type":"ContainerStarted","Data":"de62e2d03f77c44fca3ae07db1cbb7766c8c48037a934a63002808d4abcf5a0e"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.344501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.344959 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podStartSLOduration=130.344942846 podStartE2EDuration="2m10.344942846s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.344282729 +0000 UTC m=+154.034989013" watchObservedRunningTime="2026-01-21 15:28:42.344942846 +0000 UTC m=+154.035649110" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.345277 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.845259145 +0000 UTC m=+154.535965409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.345420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-796x7" event={"ID":"82e0a5a3-17e1-4a27-a30a-998b20238558","Type":"ContainerStarted","Data":"8b5bd42b9fb5ccf6e6abb21464e0e3297182b3feb747c7b0abafeb9dea0cfa3c"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.385097 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"adb706ef18d7212dd5a0ef35b71f7176b55db16d154164d8071374ec1855c724"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.418238 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xfwnt" event={"ID":"be284180-78a3-4a18-86b3-37d08ab06390","Type":"ContainerStarted","Data":"112daf0ab06740349629c5ae3b4f915f1abf7135f74513ddcf4f6391b0e53f69"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.420828 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.422422 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.422454 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.447486 4739 generic.go:334] "Generic (PLEG): container finished" podID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerID="04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5" exitCode=0 Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.447621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerDied","Data":"04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.450654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.452929 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.952914076 +0000 UTC m=+154.643620340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.475861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"7c62da2caa2e74db379a5d6a043877094c6774861680aa20c5ea0470090cbb60"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.524278 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" podStartSLOduration=130.524259272 podStartE2EDuration="2m10.524259272s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.422567291 +0000 UTC m=+154.113273555" watchObservedRunningTime="2026-01-21 15:28:42.524259272 +0000 UTC m=+154.214965536" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.553859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.554084 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.054041091 +0000 UTC m=+154.744747355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.554195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.555266 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.055253724 +0000 UTC m=+154.745959988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.583388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" event={"ID":"348f800b-2552-4315-9b58-a679d8d8b6f3","Type":"ContainerStarted","Data":"10b59dffaf425dc09b483ce89e2af9050a3475d04b3c1eb82cd6b87ba2948da6"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.595263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" podStartSLOduration=130.595242987 podStartE2EDuration="2m10.595242987s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.526208074 +0000 UTC m=+154.216914338" watchObservedRunningTime="2026-01-21 15:28:42.595242987 +0000 UTC m=+154.285949251" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.595707 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-796x7" podStartSLOduration=7.59570017 podStartE2EDuration="7.59570017s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.593216633 +0000 UTC m=+154.283922897" watchObservedRunningTime="2026-01-21 15:28:42.59570017 +0000 UTC m=+154.286406434" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.657388 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.658098 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.158079604 +0000 UTC m=+154.848785868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.711566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"b14e75d17e934a457ed88458029c4f9e6eb5d20843d316300f1dbdff321005ed"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.767498 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" podStartSLOduration=130.767474201 podStartE2EDuration="2m10.767474201s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.764758468 +0000 UTC m=+154.455464732" watchObservedRunningTime="2026-01-21 15:28:42.767474201 +0000 UTC m=+154.458180465" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.767802 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xfwnt" podStartSLOduration=130.76779608 podStartE2EDuration="2m10.76779608s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.681836052 +0000 UTC m=+154.372542316" watchObservedRunningTime="2026-01-21 15:28:42.76779608 +0000 UTC m=+154.458502344" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.769240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.770484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.270469282 +0000 UTC m=+154.961175546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.789785 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:42 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:42 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:42 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.790168 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.869535 4739 generic.go:334] "Generic (PLEG): container finished" podID="e7cd1565-a272-48a7-bc63-b61518f16400" containerID="775610ed5643952b0ccb82e4c8e92928f9f9db7771f53f7cb55200d9922288ba" exitCode=0 Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.872465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.873430 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.373062167 +0000 UTC m=+155.063768431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.873535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.879038 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:28:43.379013976 +0000 UTC m=+155.069720250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.886456 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerDied","Data":"775610ed5643952b0ccb82e4c8e92928f9f9db7771f53f7cb55200d9922288ba"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.902103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" event={"ID":"eb2e8f4d-c66b-4476-90fe-925010e7e22e","Type":"ContainerStarted","Data":"15027af3bbdd6f85b2148be402c514744eb31219e5e74ca957ea3895a941ffd3"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.966442 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"2f9b004a1223630b8a88331bfde30a19ca2afe90fc64d177e811f576225d81cb"} Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.975140 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" podStartSLOduration=130.975117077 podStartE2EDuration="2m10.975117077s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.973106852 +0000 UTC m=+154.663813116" watchObservedRunningTime="2026-01-21 15:28:42.975117077 +0000 UTC m=+154.665823341" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.981972 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.982324 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hbpqz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.982432 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.983292 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.483275315 +0000 UTC m=+155.173981579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.027468 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jcttp" podStartSLOduration=8.027440671 podStartE2EDuration="8.027440671s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:43.017894195 +0000 UTC m=+154.708600479" watchObservedRunningTime="2026-01-21 15:28:43.027440671 +0000 UTC m=+154.718146945" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.084962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.102776 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.602762504 +0000 UTC m=+155.293468768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.186465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.187503 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.687487698 +0000 UTC m=+155.378193962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.218495 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.219520 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.242115 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.245454 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.290485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.290849 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.790835943 +0000 UTC m=+155.481542207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.387284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.388678 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.391696 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.392795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.393237 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.893225222 +0000 UTC m=+155.583931486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.420771 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494306 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: 
I0121 15:28:43.495106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.495364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.495665 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.995650172 +0000 UTC m=+155.686356436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.540967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.577505 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwv56"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.582589 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.586741 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597125 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.597686 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.097672182 +0000 UTC m=+155.788378446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.598026 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.598221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.599490 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.628590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.700847 
4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.200836302 +0000 UTC m=+155.891542566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.760130 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.773244 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rv98n"] Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.774142 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.783995 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:43 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:43 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:43 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.784046 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802399 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: 
\"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802742 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv98n"] Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.802945 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.302931703 +0000 UTC m=+155.993637967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.803100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.803179 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.883189 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.906860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.906939 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.907003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod 
\"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.907118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.921632 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.42161377 +0000 UTC m=+156.112320034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.927185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.005016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"e3340b3e0c0235376e729e5ad6ac71eb9aa1a717d654adad262f9dfb84a68b0e"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021606 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc 
kubenswrapper[4739]: I0121 15:28:44.022624 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.022781 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.522752805 +0000 UTC m=+156.213459069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.022884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.026172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" event={"ID":"e1f7a893-ca61-4fee-ad9d-d5c779092226","Type":"ContainerStarted","Data":"771ed276f33ef6e1e377c606ac3caaa98166aa3f7b4622c20c1328ae1d0436d8"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.033120 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" podStartSLOduration=132.033102953 podStartE2EDuration="2m12.033102953s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.032267621 +0000 UTC m=+155.722973885" watchObservedRunningTime="2026-01-21 15:28:44.033102953 +0000 UTC m=+155.723809217" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.038267 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" event={"ID":"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82","Type":"ContainerStarted","Data":"c04c306e06502e6ea32238cef7b15918d7d3f173348df40f7c76c378bc89413b"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.072206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" event={"ID":"aa3cda86-5932-40aa-9c01-3f95853884f9","Type":"ContainerStarted","Data":"8fb9e4f706b05872c83791dd900ac7c318172518949db70dd185a560706102d3"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.085730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" 
(UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.093315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"ae8be6ae7f6044ed945d4f6ed47d053cf862aa5d536f27c576fe698edc26adb8"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.093894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.117231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerStarted","Data":"5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.123607 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.125546 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.625534004 +0000 UTC m=+156.316240268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.126409 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.129283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7f99c4af23ff157f87cfac05013be16a9a00ab592caa97b4331e1615373c5c3d"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.156030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"52bf8dcb46b197995b65ab3e0e8a26c184ad18bc49393261a194e2215ad4041e"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.167451 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" podStartSLOduration=132.16743762 podStartE2EDuration="2m12.16743762s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.126755097 +0000 UTC m=+155.817461361" watchObservedRunningTime="2026-01-21 15:28:44.16743762 +0000 UTC m=+155.858143884" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.168094 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" podStartSLOduration=131.168088268 podStartE2EDuration="2m11.168088268s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.166250368 +0000 UTC m=+155.856956632" watchObservedRunningTime="2026-01-21 15:28:44.168088268 +0000 UTC m=+155.858794532" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.172872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"647493b279a34c89c925c28d38dc7d853a97911c37f25e893aad0b40a3a515ac"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.194519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" event={"ID":"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f","Type":"ContainerStarted","Data":"d5881ecf0f4c3f2db3ac604bf5b160a90f723d4f5f224f6693d8885f51a73e45"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.218549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" podStartSLOduration=132.218533152 podStartE2EDuration="2m12.218533152s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.217699079 +0000 UTC m=+155.908405343" watchObservedRunningTime="2026-01-21 15:28:44.218533152 +0000 UTC m=+155.909239416" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.224465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.225548 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.72553227 +0000 UTC m=+156.416238534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.233563 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"47d4cd1e6d40aef0b450dd9f3300ef399be261e53af651e121b4f33c36a2b809"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.234612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.245449 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xg9nx" podStartSLOduration=9.245434444 podStartE2EDuration="9.245434444s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.243967725 +0000 UTC m=+155.934673989" watchObservedRunningTime="2026-01-21 15:28:44.245434444 +0000 UTC m=+155.936140708" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.255751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"6d671eaaf6517d3955bbe736751d0b033e805f07b9c048598b6a375506a6730b"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.256468 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.267124 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" podStartSLOduration=132.267099855 podStartE2EDuration="2m12.267099855s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.26462392 +0000 UTC m=+155.955330184" watchObservedRunningTime="2026-01-21 15:28:44.267099855 +0000 UTC m=+155.957806119" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.275799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" 
event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"a908a84ae0cefc6a9b3ba6c636d8b8332265268fd11ce86f86938ec30c5d1c23"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.299538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"481c0cea9821c1e840c450d8171516c1a8c20869418c230f7952845920fb7667"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.301018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" event={"ID":"52aa9f8a-6b89-442e-b9a2-5943d96d42fc","Type":"ContainerStarted","Data":"0cb48e6710064d93a284af9226f4a142c14287699fbb7621f68f135f43e37673"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.324660 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podStartSLOduration=132.32464578 podStartE2EDuration="2m12.32464578s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.302021903 +0000 UTC m=+155.992728187" watchObservedRunningTime="2026-01-21 15:28:44.32464578 +0000 UTC m=+156.015352044" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.326426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.333132 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.833117818 +0000 UTC m=+156.523824192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.351303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" event={"ID":"114b5947-30d6-4a6b-a1c6-1b1f75888037","Type":"ContainerStarted","Data":"0ba0a662f5bb17d4898a50dbc00444c9bcbdee1bc88f11ae8f930deaa25c41fb"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.352274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.361158 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" podStartSLOduration=132.3611235 podStartE2EDuration="2m12.3611235s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.360608856 +0000 UTC m=+156.051315120" watchObservedRunningTime="2026-01-21 15:28:44.3611235 +0000 UTC m=+156.051829764" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.361453 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" podStartSLOduration=132.361432689 podStartE2EDuration="2m12.361432689s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.332542433 +0000 UTC m=+156.023248687" watchObservedRunningTime="2026-01-21 15:28:44.361432689 +0000 UTC m=+156.052138943" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.383045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" event={"ID":"c678179e-9aa8-4246-88c7-d0b23452615e","Type":"ContainerStarted","Data":"a7c7c0666c38b93c5d3c72f14e93ee98feab143474c1d606715b8c7add594d78"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.396901 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" podStartSLOduration=131.39688019 podStartE2EDuration="2m11.39688019s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.390728245 +0000 UTC m=+156.081434509" watchObservedRunningTime="2026-01-21 15:28:44.39688019 +0000 UTC m=+156.087586454" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.398929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"1d9c627cad8a2be1a70fae5b8b00d762ece63941b61bf98701049dc535e3623b"} Jan 21 15:28:44 crc kubenswrapper[4739]: 
I0121 15:28:44.427639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.428467 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" podStartSLOduration=132.428447158 podStartE2EDuration="2m12.428447158s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.419654191 +0000 UTC m=+156.110360455" watchObservedRunningTime="2026-01-21 15:28:44.428447158 +0000 UTC m=+156.119153422" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.429302 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.92928385 +0000 UTC m=+156.619990124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.440116 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"278b2d43633474ea64dd1aef0f8b0497b26adffb1fd042e1bc5f5fd41c3b48a8"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.449061 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" podStartSLOduration=131.449048141 podStartE2EDuration="2m11.449048141s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.44678106 +0000 UTC m=+156.137487324" watchObservedRunningTime="2026-01-21 15:28:44.449048141 +0000 UTC m=+156.139754405" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.456857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"a2ea388caebdc4dad57ba2b92825e7ae3c5e34167db856946c50867d83d22d15"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.456895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"5e6967827c20509cd1fcd580e27ff80eb28df064e73103f61fcd00d9a36d3a79"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.469865 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerStarted","Data":"354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.470229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.476318 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podStartSLOduration=131.476302683 podStartE2EDuration="2m11.476302683s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.475223194 +0000 UTC m=+156.165929458" watchObservedRunningTime="2026-01-21 15:28:44.476302683 +0000 UTC m=+156.167008947" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.484546 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.484724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"b04fdbe9c321076eed796e5055a95977bfbee25716fdf15e6417da3218c689c7"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.510643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3cd1041dc63e0d75c17539df6ef2dd300ddf5739b6924dfb12bd26d4a300a654"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.510688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ec0863307254b3dd81790a11d97ffebb37183121ed85890f2eb803da49e5a1e9"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.519667 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" podStartSLOduration=132.519651577 podStartE2EDuration="2m12.519651577s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.515543127 +0000 UTC m=+156.206249391" watchObservedRunningTime="2026-01-21 15:28:44.519651577 +0000 UTC m=+156.210357841" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.522791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fe38b39eb3a0a1163381c79d496ebe21fe90c97159285d73965775a981f9e354"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.522855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"87d5aafdfce401363fc36c03f4fb02bf474baef4cd5dceb2126a32f152d5a35c"} Jan 21 15:28:44 crc 
kubenswrapper[4739]: I0121 15:28:44.523050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.525145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerStarted","Data":"03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.525178 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.528009 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.528041 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.529080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.530929 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.030916849 +0000 UTC m=+156.721623113 (durationBeforeRetry 500ms). 
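
The downloads-7954f5f757-xfwnt entries show the other recurring theme: readiness probes against containers that are not listening yet. An HTTP probe is essentially a bounded GET where a dial error (here "connect: connection refused") or a status outside 200-399 marks the container unready. A self-contained sketch of that check; the URL is taken from the log and the 1s timeout is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics the shape of an HTTP readiness probe: bounded by a
// timeout, failing on dial errors or a status outside the 200-399 range.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // covers "connection refused" and timeouts alike
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the log; it refuses connections until the
	// download-server container starts listening on 8080.
	if err := probeOnce("http://10.217.0.22:8080/"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("ready")
	}
}
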
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.543547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.598108 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" podStartSLOduration=132.598092613 podStartE2EDuration="2m12.598092613s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.560568175 +0000 UTC m=+156.251274439" watchObservedRunningTime="2026-01-21 15:28:44.598092613 +0000 UTC m=+156.288798877" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.624513 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" podStartSLOduration=132.624499122 podStartE2EDuration="2m12.624499122s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.59874841 +0000 UTC m=+156.289454684" watchObservedRunningTime="2026-01-21 15:28:44.624499122 +0000 UTC m=+156.315205386" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.625160 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" podStartSLOduration=132.625154659 podStartE2EDuration="2m12.625154659s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.62332097 +0000 UTC m=+156.314027234" watchObservedRunningTime="2026-01-21 15:28:44.625154659 +0000 UTC m=+156.315860923" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.648623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.649702 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.675196 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.175178503 +0000 UTC m=+156.865884757 (durationBeforeRetry 500ms). 
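
Each "Observed pod startup duration" record pairs wall-clock times with a monotonic offset (m=+156.x is seconds since kubelet start). Because firstStartedPulling and lastFinishedPulling are the zero time here (no image pull was observed), podStartSLOduration equals podStartE2EDuration, and both are simply watchObservedRunningTime minus podCreationTimestamp. A worked check in Go using the kube-apiserver-operator figures above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the log's "2026-01-21 15:26:32 +0000 UTC" timestamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-21 15:26:32 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-21 15:28:44.598092613 +0000 UTC")

	// With no pull phase observed, SLO and E2E durations coincide.
	fmt.Println(running.Sub(created)) // 2m12.598092613s, i.e. 132.598092613s
}
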
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.726090 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" podStartSLOduration=132.726069249 podStartE2EDuration="2m12.726069249s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.72536765 +0000 UTC m=+156.416073934" watchObservedRunningTime="2026-01-21 15:28:44.726069249 +0000 UTC m=+156.416775513" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.728672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podStartSLOduration=131.728645448 podStartE2EDuration="2m11.728645448s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.695030546 +0000 UTC m=+156.385736820" watchObservedRunningTime="2026-01-21 15:28:44.728645448 +0000 UTC m=+156.419351722" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.779956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.780473 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.280458009 +0000 UTC m=+156.971164273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.780705 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:44 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:44 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:44 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.780739 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.788799 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" podStartSLOduration=132.788776913 podStartE2EDuration="2m12.788776913s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.785621228 +0000 UTC m=+156.476327492" watchObservedRunningTime="2026-01-21 15:28:44.788776913 +0000 UTC m=+156.479483177" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.848132 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv98n"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.869340 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.882110 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.882276 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.382250903 +0000 UTC m=+157.072957167 (durationBeforeRetry 500ms). 
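
The router startup probe failures include the start of the healthz response body: one "[-]name failed: reason withheld" or "[+]name ok" line per registered check, then "healthz check failed" alongside the 500 status the prober reports. A small sketch of a handler producing that shape; the check names come from the log but their logic here is illustrative:

package main

import (
	"fmt"
	"net/http"
)

// check is a named health check; backend-http, has-synced and
// process-running mirror the names in the log, not the router's real logic.
type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // prober logs statuscode: 500
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not synced") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}
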
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.882453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.882769 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.382759286 +0000 UTC m=+157.073465550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: W0121 15:28:44.884236 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f24f8c8_f70f_47a4_998b_72b7ba0875cb.slice/crio-8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac WatchSource:0}: Error finding container 8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac: Status 404 returned error can't find the container with id 8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.926946 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" podStartSLOduration=132.926925712 podStartE2EDuration="2m12.926925712s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.907689445 +0000 UTC m=+156.598395699" watchObservedRunningTime="2026-01-21 15:28:44.926925712 +0000 UTC m=+156.617631986" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.928108 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" podStartSLOduration=132.928101233 podStartE2EDuration="2m12.928101233s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.924860056 +0000 UTC m=+156.615566320" watchObservedRunningTime="2026-01-21 15:28:44.928101233 +0000 UTC m=+156.618807497" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.979923 4739 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.983209 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.983625 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.483604984 +0000 UTC m=+157.174311248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.084881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.085165 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.585154041 +0000 UTC m=+157.275860305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.161684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.163224 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.169078 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.170738 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.186141 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.186252 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.686232254 +0000 UTC m=+157.376938518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.186628 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.186928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.686920482 +0000 UTC m=+157.377626746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.288621 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.788591533 +0000 UTC m=+157.479297797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.289006 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.289307 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.789292141 +0000 UTC m=+157.479998405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.352379 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.352443 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.390613 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.890592851 +0000 UTC m=+157.581299115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.391191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.391456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.412835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.478268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.491909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.492373 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.992353893 +0000 UTC m=+157.683060167 (durationBeforeRetry 500ms). 
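
By contrast, the redhat-marketplace-kk94c mounts succeed immediately: utilities and catalog-content are emptyDir scratch volumes and need no external driver, so kubelet can satisfy them as soon as the pod is admitted. The equivalent definitions in the typed API; the mount paths are hypothetical, only the volume names come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// emptyDir volumes need no CSI driver: kubelet backs them with a
	// node-local directory, so MountVolume.SetUp succeeds right away.
	volumes := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	mounts := []corev1.VolumeMount{
		{Name: "utilities", MountPath: "/utilities"},       // path hypothetical
		{Name: "catalog-content", MountPath: "/extracted"}, // path hypothetical
	}
	fmt.Println(len(volumes), len(mounts))
}
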
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.525080 4739 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q7k9s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.525168 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.527391 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.527422 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.530409 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"cc670b96dead1450a562f21a646f9e5f756fd0a05781547fb1510f02ab348006"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.531321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"35c59b7a17a024e316d93c0ddc28b0f3ad5d3ed108d5a24d6ca60b8f080c2d86"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.532136 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.533707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"74bfb69c160688b5ff27800d0d01f0fdc1f36f6e4078100985b4f399124e56f3"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.535242 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"79f95c360a7e94a59a01db38dbf447c36e2a2e76898df7f7fe7f18cbafe84f9b"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.564154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.565512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.575760 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.594570 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.094545567 +0000 UTC m=+157.785251841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.594349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.595014 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.595502 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.095486982 +0000 UTC m=+157.786193256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697028 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.697172 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.197153942 +0000 UTC m=+157.887860206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697449 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697781 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697804 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.698093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.700119 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.200099281 +0000 UTC m=+157.890805545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.780494 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:45 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:45 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:45 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.780563 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.799866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800203 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.800861 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.300792485 +0000 UTC m=+157.991498799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.801296 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.801443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.841230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.891991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.901430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.901729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.401716805 +0000 UTC m=+158.092423069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.004321 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.004420 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.504404742 +0000 UTC m=+158.195111006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.004728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.005081 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.50507141 +0000 UTC m=+158.195777684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.106327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.107151 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.607132139 +0000 UTC m=+158.297838403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.208284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.208621 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.708607525 +0000 UTC m=+158.399313799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281133 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281191 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281529 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281549 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.311485 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.311890 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.811871967 +0000 UTC m=+158.502578241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.345147 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.414592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.414910 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.914896633 +0000 UTC m=+158.605602897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.521482 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.522052 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.02203614 +0000 UTC m=+158.712742404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.541784 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.541860 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.542180 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.542231 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.558616 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.562871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerStarted","Data":"3561443b035229b0ad4fade4d9010170b1939001011bac466caabf71ec33696b"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.567539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02201d794e34c6a0aa329c91f414c8c29bc2dfc2ce73abbad7ecfc1c6174bad4"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.573379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.594122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.594166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"80f37abb660ca7973267f6b03eb2b00ab62858a4ef5d1dbd02c60af6327d0edf"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.596140 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.596180 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.624732 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.625161 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.125147038 +0000 UTC m=+158.815853302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.658073 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.659577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.725802 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.725917 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.726894 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.22687864 +0000 UTC m=+158.917584904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.781986 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:46 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:46 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:46 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.782049 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.827577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.827978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.828049 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.828084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.828487 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.328472918 +0000 UTC m=+159.019179182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.836681 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947440 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.948064 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.448045828 +0000 UTC m=+159.138752092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.948318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.964372 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.972347 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.039515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048394 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048622 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048658 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.049035 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.549023999 +0000 UTC m=+159.239730253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.116908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" podStartSLOduration=135.116888052 podStartE2EDuration="2m15.116888052s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:47.038431405 +0000 UTC m=+158.729137669" watchObservedRunningTime="2026-01-21 15:28:47.116888052 +0000 UTC m=+158.807594316"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.143076 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.159445 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.659424494 +0000 UTC m=+159.350130758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159742 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159933 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.160677 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.660663807 +0000 UTC m=+159.351370071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.161290 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.163935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.239489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.239949 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.240698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.261419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.261788 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.761773891 +0000 UTC m=+159.452480155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.318122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.337401 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.363355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.363755 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.863740359 +0000 UTC m=+159.554446623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.464670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.465068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.965054129 +0000 UTC m=+159.655760393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.527978 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"]
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.566483 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.566841 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.066829812 +0000 UTC m=+159.757536066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.573528 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"]
Jan 21 15:28:47 crc kubenswrapper[4739]: W0121 15:28:47.584111 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed3c687_16d6_444b_8964_37ed32442908.slice/crio-38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed WatchSource:0}: Error finding container 38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed: Status 404 returned error can't find the container with id 38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605377 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605580 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605485 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605777 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.618281 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.618402 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.619884 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.620159 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.620222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.628286 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.628358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.651874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"67017651e3fd51cbb37005cd991e3bce30f393489ee1d0dd41b404342d22c596"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.655943 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.656025 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.660360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"353a2791208f5853a1241541e270354e4fc453c8d0c53deec17482b7d7512a0d"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.663192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.672527 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.672888 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.17287406 +0000 UTC m=+159.863580324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.720923 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.723213 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.727975 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.728036 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.733477 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.773592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.776141 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.276127532 +0000 UTC m=+159.966833796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.795013 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:47 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:47 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:47 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.795058 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.876997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.878440 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.378424709 +0000 UTC m=+160.069130973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.899269 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podStartSLOduration=135.899250657 podStartE2EDuration="2m15.899250657s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:47.790357194 +0000 UTC m=+159.481063468" watchObservedRunningTime="2026-01-21 15:28:47.899250657 +0000 UTC m=+159.589956921"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.982050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.982601 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.482587035 +0000 UTC m=+160.173293299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.085036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.085415 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.585396105 +0000 UTC m=+160.276102369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.186730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.187111 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.687093636 +0000 UTC m=+160.377799900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.287357 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.287698 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.787684637 +0000 UTC m=+160.478390901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.390379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.390709 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.890693922 +0000 UTC m=+160.581400186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.491465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.491858 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.991843489 +0000 UTC m=+160.682549753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.593106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.593489 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.093472607 +0000 UTC m=+160.784178871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.684228 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c" exitCode=0
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.684306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c"}
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.693671 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" exitCode=0
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.693746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8"}
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.694390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.694704 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-21 15:28:49.194690435 +0000 UTC m=+160.885396689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.701791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"81f4070f45ff905a2e448c14f92f2326b0171fa0b1737e4deca85218af2c0620"} Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.762893 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.767465 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.767520 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.768178 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.779535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.782805 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:48 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:48 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:48 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.782874 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.799206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.801383 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.301367159 +0000 UTC m=+160.992073423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.808436 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.808689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.824574 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.900945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.901153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.901177 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.901398 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.401375074 +0000 UTC m=+161.092081328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.004972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005288 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.005374 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.505356826 +0000 UTC m=+161.196063170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.034097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.059997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.107390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.107727 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.607712695 +0000 UTC m=+161.298418959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.132090 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.208688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.209048 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.709036365 +0000 UTC m=+161.399742629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.314494 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.315030 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.815009391 +0000 UTC m=+161.505715655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.416159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.416713 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.91668089 +0000 UTC m=+161.607387154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.419165 4739 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.443601 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.518549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.519808 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.019793829 +0000 UTC m=+161.710500093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.623569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.623928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.123915654 +0000 UTC m=+161.814621918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.734599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.735135 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.23511409 +0000 UTC m=+161.925820354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.781004 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:49 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:49 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:49 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.781053 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.783746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"6cd072e3f9ba88c3ba504bfd4431757413acbc0ae5ea611bfcc24f8acaacb2ba"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810465 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299" exitCode=0 Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"0ff96cbaaff2209979db14735415e92278e9af5295f5d7422450da587e74592e"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.838677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.839068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.339051891 +0000 UTC m=+162.029758155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874758 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433" exitCode=0 Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874883 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerStarted","Data":"8ba79c9d61bcfeac0a269e7655d837a83fd2729f207c3cf49a1f21c91afb909b"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.912854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.941311 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.941993 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.441965413 +0000 UTC m=+162.132671677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.961159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.044067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: E0121 15:28:50.046623 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.546608443 +0000 UTC m=+162.237314707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.148008 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:50 crc kubenswrapper[4739]: E0121 15:28:50.148289 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.648273863 +0000 UTC m=+162.338980117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.175880 4739 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T15:28:49.419182737Z","Handler":null,"Name":""} Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.192349 4739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.192628 4739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.250756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.345972 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.346021 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.527422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.557651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.727734 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.785143 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:50 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:50 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:50 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.785208 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.790112 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.817078 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.992167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"60f89354e3c33fae86cc9c4adb28b6fc40be3da19ff04b345a7c8430ed5dba46"} Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.016404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerStarted","Data":"78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83"} Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.075672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p994f" podStartSLOduration=16.075647933 podStartE2EDuration="16.075647933s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:51.069688414 +0000 UTC m=+162.760394678" watchObservedRunningTime="2026-01-21 15:28:51.075647933 +0000 UTC m=+162.766354197" Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.794626 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:51 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:51 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:51 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.795041 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.986995 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.035378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerStarted","Data":"2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd"} Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.091200 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.091891 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.101454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.101698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.120159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.207306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.207970 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.225224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.225307 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.229175 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jbgcq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]log ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]etcd ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/max-in-flight-filter ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 15:28:52 crc kubenswrapper[4739]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-startinformers ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 15:28:52 crc kubenswrapper[4739]: livez check failed Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.229247 4739 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podUID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.330297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.330402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.331041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.391195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.416223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.780573 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:52 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.780637 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.118076 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerStarted","Data":"9cb5f44f60dc865e24fcf1602e334dc1e620dffa67ad590a7f5a509f38063137"} Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.169388 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.169371609 podStartE2EDuration="5.169371609s" podCreationTimestamp="2026-01-21 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:53.166229685 +0000 UTC m=+164.856935949" watchObservedRunningTime="2026-01-21 15:28:53.169371609 +0000 UTC m=+164.860077873" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.348060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:53 crc kubenswrapper[4739]: W0121 15:28:53.429855 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod128f7b08_b5b5_4e6f_9e64_db0ee3a08e5a.slice/crio-33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c WatchSource:0}: Error finding container 33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c: Status 404 returned error can't find the container with id 33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.613637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.790311 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:53 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:53 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:53 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.790552 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 
15:28:54.183705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerStarted","Data":"33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c"} Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.206659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerStarted","Data":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"} Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.206767 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.224391 4739 generic.go:334] "Generic (PLEG): container finished" podID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerID="5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514" exitCode=0 Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.224559 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerDied","Data":"5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514"} Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.259248 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" podStartSLOduration=142.259230642 podStartE2EDuration="2m22.259230642s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:54.236771329 +0000 UTC m=+165.927477593" watchObservedRunningTime="2026-01-21 15:28:54.259230642 +0000 UTC m=+165.949936906" Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.262570 4739 generic.go:334] "Generic (PLEG): container finished" podID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerID="2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd" exitCode=0 Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.262616 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerDied","Data":"2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd"} Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.782549 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:54 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:54 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:54 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.782876 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.324223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.324955 4739 generic.go:334] "Generic (PLEG): container finished" podID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerID="1584d176eadab380503feee7c6114f65c087f3684a5b25c8f9df5740d6008e4b" exitCode=0 Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.325364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerDied","Data":"1584d176eadab380503feee7c6114f65c087f3684a5b25c8f9df5740d6008e4b"} Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.340029 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.497641 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.780673 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:55 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:55 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:55 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.780733 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.917452 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.003188 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.035992 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036525 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2dc0c86b-3d10-47be-ab85-dabae6379a3e" (UID: "2dc0c86b-3d10-47be-ab85-dabae6379a3e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036430 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036932 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.044211 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2dc0c86b-3d10-47be-ab85-dabae6379a3e" (UID: "2dc0c86b-3d10-47be-ab85-dabae6379a3e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.137996 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138371 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.139213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume" (OuterVolumeSpecName: "config-volume") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.142488 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc" (OuterVolumeSpecName: "kube-api-access-pp7vc") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "kube-api-access-pp7vc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.144788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239323 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239363 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239375 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.324598 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"] Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363333 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerDied","Data":"39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d"} Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363381 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.381909 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.382769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerDied","Data":"78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83"} Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.382805 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83" Jan 21 15:28:56 crc kubenswrapper[4739]: W0121 15:28:56.438746 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8521870_96a9_4db6_94b3_9f69336d280b.slice/crio-f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49 WatchSource:0}: Error finding container f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49: Status 404 returned error can't find the container with id f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49 Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.692411 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.752589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.752655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" (UID: "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.753437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.753792 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.756627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" (UID: "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.780115 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:56 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:56 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:56 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.780174 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.855366 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.224199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.237490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.403795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49"} Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413127 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerDied","Data":"33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c"} Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413485 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606565 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606643 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606729 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.721018 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.721081 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.778770 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:57 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:57 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.778834 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:58 crc 
kubenswrapper[4739]: I0121 15:28:58.793278 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:58 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:58 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:58 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:58 crc kubenswrapper[4739]: I0121 15:28:58.793345 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.468508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"827549d753728490489d66e67f65b3e3fe678ff4b9b108b18afaeef2bd0dfb6c"} Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.779346 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:59 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:59 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:59 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.779661 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.502318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"fdd2cbda77efdfeb291c985e376316ada1ff60b0dc02d20615bee1a013a2e43e"} Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.530674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mwzx6" podStartSLOduration=148.530645277 podStartE2EDuration="2m28.530645277s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:29:00.516164059 +0000 UTC m=+172.206870333" watchObservedRunningTime="2026-01-21 15:29:00.530645277 +0000 UTC m=+172.221351541" Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.779642 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:00 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:00 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:00 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.779713 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" 
podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:01 crc kubenswrapper[4739]: I0121 15:29:01.778844 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:01 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:01 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:01 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:01 crc kubenswrapper[4739]: I0121 15:29:01.778918 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:02 crc kubenswrapper[4739]: I0121 15:29:02.779785 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:02 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:02 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:02 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:02 crc kubenswrapper[4739]: I0121 15:29:02.780290 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:03 crc kubenswrapper[4739]: I0121 15:29:03.779606 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:03 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:03 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:03 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:03 crc kubenswrapper[4739]: I0121 15:29:03.779662 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:04 crc kubenswrapper[4739]: I0121 15:29:04.779955 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:04 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:04 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:04 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:04 crc kubenswrapper[4739]: I0121 15:29:04.780324 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.222798 4739 patch_prober.go:28] 
interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.222870 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.807054 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:05 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:05 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:05 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.807134 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:06 crc kubenswrapper[4739]: I0121 15:29:06.779541 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:06 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:06 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:06 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:06 crc kubenswrapper[4739]: I0121 15:29:06.779626 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.610712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.721695 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.721755 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.779740 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:07 crc kubenswrapper[4739]: 
[-]has-synced failed: reason withheld Jan 21 15:29:07 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:07 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.779796 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:08 crc kubenswrapper[4739]: I0121 15:29:08.781402 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:08 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:08 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:08 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:08 crc kubenswrapper[4739]: I0121 15:29:08.782439 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:09 crc kubenswrapper[4739]: I0121 15:29:09.779680 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:29:09 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:29:09 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:29:09 crc kubenswrapper[4739]: healthz check failed Jan 21 15:29:09 crc kubenswrapper[4739]: I0121 15:29:09.780319 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.779541 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.793028 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.795992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:29:17 crc kubenswrapper[4739]: I0121 15:29:17.727372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:29:17 crc kubenswrapper[4739]: I0121 15:29:17.732499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:29:18 crc kubenswrapper[4739]: I0121 15:29:18.857493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:29:22 crc kubenswrapper[4739]: I0121 15:29:22.008047 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:29:29 crc kubenswrapper[4739]: 
I0121 15:29:29.871426 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872232 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872249 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872263 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles" Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872282 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872291 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872420 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872431 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872442 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872864 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.877552 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.884143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.884983 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.959910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.959975 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.082024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.194254 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.427867 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.428460 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr9tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4sr9g_openshift-marketplace(db025233-2eca-4500-9e3c-67610f3f7a37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.429715 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223447 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223949 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223994 4739 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.224432 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.224522 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" gracePeriod=600 Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.262485 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.271363 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.271750 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332449 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433543 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433567 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433999 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.453676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.609771 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.737799 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" exitCode=0 Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.737856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"} Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.402159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.489187 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.489361 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2lnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t6phz_openshift-marketplace(465fbe23-a874-4ffb-9296-1b9fd4b8f1fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.490210 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.490835 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.500105 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6wj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kdd9z_openshift-marketplace(47ff9f0e-8d35-4492-a0f4-6b7b580afa21): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.501373 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.509241 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.509406 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2pd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vwv56_openshift-marketplace(3f24f8c8-f70f-47a4-998b-72b7ba0875cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.511288 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.679712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.679960 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.680012 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.754685 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.756222 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g6gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w5v4k_openshift-marketplace(1ed3c687-16d6-444b-8964-37ed32442908): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.757351 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.783677 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.783806 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5fwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kk94c_openshift-marketplace(1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.785538 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.299560 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.305602 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.374215 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.374583 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2v47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-27hq7_openshift-marketplace(d5239161-d375-4078-8cbf-95219376f756): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.375973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.420388 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.421064 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rv98n_openshift-marketplace(fdd79051-71bc-4353-a426-f4a86fe4de42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.422787 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.736793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:29:39 crc kubenswrapper[4739]: W0121 15:29:39.746805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1526a950_536b_4c8d_8444_686bead14eb3.slice/crio-9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728 WatchSource:0}: Error finding container 9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728: Status 404 returned error can't find the container with id 9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728 Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.756751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerStarted","Data":"9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728"} Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.759554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"} Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.773209 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.773306 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.815600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.764956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerStarted","Data":"380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.765390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerStarted","Data":"1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.775308 4739 generic.go:334] "Generic (PLEG): container finished" podID="1526a950-536b-4c8d-8444-686bead14eb3" containerID="892a036ec70ae705833c59d0ad63a9a2eda5cf629345a18ecca59000d8e63495" exitCode=0 Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.775430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerDied","Data":"892a036ec70ae705833c59d0ad63a9a2eda5cf629345a18ecca59000d8e63495"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.793913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.793895831 podStartE2EDuration="5.793895831s" podCreationTimestamp="2026-01-21 15:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:29:40.779955777 +0000 UTC m=+212.470662061" watchObservedRunningTime="2026-01-21 15:29:40.793895831 +0000 UTC m=+212.484602085" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.015428 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"1526a950-536b-4c8d-8444-686bead14eb3\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"1526a950-536b-4c8d-8444-686bead14eb3\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117833 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1526a950-536b-4c8d-8444-686bead14eb3" (UID: "1526a950-536b-4c8d-8444-686bead14eb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.123800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1526a950-536b-4c8d-8444-686bead14eb3" (UID: "1526a950-536b-4c8d-8444-686bead14eb3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.218522 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.218556 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerDied","Data":"9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728"} Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784631 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784691 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.135035 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:00 crc kubenswrapper[4739]: E0121 15:30:00.135603 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.135615 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.136792 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.137855 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.149915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.149999 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.206394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218015 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.319850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.320045 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.320150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.325549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.336581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.341674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.470467 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.751218 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:02 crc kubenswrapper[4739]: W0121 15:30:02.765723 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f378ddb_72bf_4542_bec3_ce2652d0ab02.slice/crio-b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071 WatchSource:0}: Error finding container b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071: Status 404 returned error can't find the container with id b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071 Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.915970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.917410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.924330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.945851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.961384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.983298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} Jan 21 15:30:03 crc kubenswrapper[4739]: I0121 15:30:03.005209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerStarted","Data":"d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d"} Jan 21 15:30:03 crc kubenswrapper[4739]: I0121 15:30:03.005274 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerStarted","Data":"b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.012671 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.012730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.020000 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.020067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.022801 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.022869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.024632 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.024693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.026385 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerID="d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.026424 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerDied","Data":"d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.030375 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.030445 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.033341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.043792 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.043884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.047004 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.047028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"} Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.054263 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0" exitCode=0 Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.055380 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0"} Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.302605 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362908 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.365330 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.368769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.370242 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg" (OuterVolumeSpecName: "kube-api-access-bmmbg") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "kube-api-access-bmmbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464367 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464397 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464408 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.061353 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerDied","Data":"b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071"} Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.062078 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071" Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.061546 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:07 crc kubenswrapper[4739]: I0121 15:30:07.068631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3"} Jan 21 15:30:07 crc kubenswrapper[4739]: I0121 15:30:07.089443 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kk94c" podStartSLOduration=4.873240454 podStartE2EDuration="1m22.089421304s" podCreationTimestamp="2026-01-21 15:28:45 +0000 UTC" firstStartedPulling="2026-01-21 15:28:48.692001403 +0000 UTC m=+160.382707667" lastFinishedPulling="2026-01-21 15:30:05.908182253 +0000 UTC m=+237.598888517" observedRunningTime="2026-01-21 15:30:07.085477217 +0000 UTC m=+238.776183481" watchObservedRunningTime="2026-01-21 15:30:07.089421304 +0000 UTC m=+238.780127578" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.076951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.079994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.082986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" 
event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.085867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.088314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.091731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerStarted","Data":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.127933 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sr9g" podStartSLOduration=5.59392433 podStartE2EDuration="1m25.127909359s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.660529858 +0000 UTC m=+159.351236122" lastFinishedPulling="2026-01-21 15:30:07.194514887 +0000 UTC m=+238.885221151" observedRunningTime="2026-01-21 15:30:08.104368177 +0000 UTC m=+239.795074441" watchObservedRunningTime="2026-01-21 15:30:08.127909359 +0000 UTC m=+239.818615623" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.129331 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-27hq7" podStartSLOduration=5.44683411 podStartE2EDuration="1m25.129303667s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.631308173 +0000 UTC m=+159.322014437" lastFinishedPulling="2026-01-21 15:30:07.31377773 +0000 UTC m=+239.004483994" observedRunningTime="2026-01-21 15:30:08.125103344 +0000 UTC m=+239.815809608" watchObservedRunningTime="2026-01-21 15:30:08.129303667 +0000 UTC m=+239.820009971" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.154564 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5v4k" podStartSLOduration=4.603573719 podStartE2EDuration="1m23.154543445s" podCreationTimestamp="2026-01-21 15:28:45 +0000 UTC" firstStartedPulling="2026-01-21 15:28:48.694955712 +0000 UTC m=+160.385661976" lastFinishedPulling="2026-01-21 15:30:07.245925448 +0000 UTC m=+238.936631702" observedRunningTime="2026-01-21 15:30:08.151117582 +0000 UTC m=+239.841823846" watchObservedRunningTime="2026-01-21 15:30:08.154543445 +0000 UTC m=+239.845249709" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.177036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kdd9z" podStartSLOduration=4.8456902809999995 podStartE2EDuration="1m22.177018119s" podCreationTimestamp="2026-01-21 15:28:46 +0000 UTC" firstStartedPulling="2026-01-21 15:28:49.881460839 +0000 UTC m=+161.572167103" lastFinishedPulling="2026-01-21 15:30:07.212788677 +0000 UTC m=+238.903494941" observedRunningTime="2026-01-21 15:30:08.175551099 +0000 UTC 
m=+239.866257363" watchObservedRunningTime="2026-01-21 15:30:08.177018119 +0000 UTC m=+239.867724383" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.206641 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwv56" podStartSLOduration=5.438095525 podStartE2EDuration="1m25.206622584s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.62449179 +0000 UTC m=+159.315198054" lastFinishedPulling="2026-01-21 15:30:07.393018849 +0000 UTC m=+239.083725113" observedRunningTime="2026-01-21 15:30:08.203421528 +0000 UTC m=+239.894127792" watchObservedRunningTime="2026-01-21 15:30:08.206622584 +0000 UTC m=+239.897328858" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.246779 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rv98n" podStartSLOduration=5.704531753 podStartE2EDuration="1m25.246755052s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.619568219 +0000 UTC m=+159.310274483" lastFinishedPulling="2026-01-21 15:30:07.161791518 +0000 UTC m=+238.852497782" observedRunningTime="2026-01-21 15:30:08.245123497 +0000 UTC m=+239.935829771" watchObservedRunningTime="2026-01-21 15:30:08.246755052 +0000 UTC m=+239.937461316" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.588137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.588424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.760708 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.761321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.928405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.928490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:14 crc kubenswrapper[4739]: I0121 15:30:14.126605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:14 crc kubenswrapper[4739]: I0121 15:30:14.126653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.479286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.479589 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.892786 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.892882 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.520245 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.521328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.521502 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.522274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.524241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.527490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.566180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.570528 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.570981 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.579628 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.580428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.101988 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.238032 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.318928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.319216 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.369770 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.555256 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.725448 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.726570 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.726743 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727080 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727778 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727985 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728101 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728496 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728518 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728530 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728538 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728567 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728574 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728587 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728593 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728606 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728612 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc 
kubenswrapper[4739]: I0121 15:30:17.729495 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729511 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729557 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729566 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729620 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729958 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729973 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729983 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729994 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.730005 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.730158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.730170 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.735741 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.778366 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838424 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940002 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940527 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940487 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.941056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.941625 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.071978 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.155632 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157201 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157939 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" exitCode=0 Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157982 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" exitCode=2 Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.158077 4739 scope.go:117] "RemoveContainer" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.159102 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" containerID="cri-o://5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" gracePeriod=2 Jan 21 15:30:18 crc kubenswrapper[4739]: E0121 15:30:18.159943 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w5v4k.188cc8b175b1517a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w5v4k,UID:1ed3c687-16d6-444b-8964-37ed32442908,APIVersion:v1,ResourceVersion:28001,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,LastTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.160139 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.160617 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201286 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201773 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201977 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.202328 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.786386 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.787063 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.787708 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.167634 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.169126 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" exitCode=0 Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.169291 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" exitCode=0 Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.171678 4739 generic.go:334] "Generic (PLEG): container finished" podID="53ec1001-a151-445c-8422-6a4b1154727a" containerID="380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2" exitCode=0 Jan 21 15:30:19 crc 
kubenswrapper[4739]: I0121 15:30:19.171917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerDied","Data":"380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2"} Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.172714 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.172946 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.173169 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.173376 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.179020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.183716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d8a09bc840e4e8b52b820682d53b7c047b157a1bcc2311c802c43745ca4ad2c9"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184287 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184706 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184950 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.185230 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.185367 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191122 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" exitCode=0 Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191959 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191982 4739 scope.go:117] "RemoveContainer" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.204736 4739 scope.go:117] "RemoveContainer" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.223777 4739 scope.go:117] "RemoveContainer" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248294 4739 scope.go:117] "RemoveContainer" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.248807 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": container with 
ID starting with 5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705 not found: ID does not exist" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248867 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} err="failed to get container status \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": rpc error: code = NotFound desc = could not find container \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": container with ID starting with 5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248894 4739 scope.go:117] "RemoveContainer" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.249453 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": container with ID starting with c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522 not found: ID does not exist" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249484 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} err="failed to get container status \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": rpc error: code = NotFound desc = could not find container \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": container with ID starting with c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249502 4739 scope.go:117] "RemoveContainer" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.249897 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": container with ID starting with 04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8 not found: ID does not exist" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249957 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8"} err="failed to get container status \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": rpc error: code = NotFound desc = could not find container \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": container with ID starting with 04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod 
\"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269807 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.270906 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities" (OuterVolumeSpecName: "utilities") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.275975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn" (OuterVolumeSpecName: "kube-api-access-7g6gn") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "kube-api-access-7g6gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.292592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371409 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371445 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371458 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.399425 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.402131 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.402658 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.403205 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.403497 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472481 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472663 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.473020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.473066 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock" (OuterVolumeSpecName: "var-lock") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.475994 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574288 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574340 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574359 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.136299 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.138072 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.138875 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.139384 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.139940 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.140239 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.140482 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180704 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180918 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181340 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181358 4739 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181370 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.198045 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.199054 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.199514 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.200260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.200715 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.201101 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.201587 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.202493 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.202754 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203024 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203032 4739 status_manager.go:851] "Failed to get status for pod" 
podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203869 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204289 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" exitCode=0 Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204304 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204379 4739 scope.go:117] "RemoveContainer" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204509 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204717 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206056 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206315 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206619 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207192 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerDied","Data":"1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207471 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.208266 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.208680 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.210483 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.210615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214100 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214459 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214662 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214884 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.215366 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223172 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223372 4739 scope.go:117] "RemoveContainer" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223893 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224213 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224616 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224928 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.225124 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.225310 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc 
kubenswrapper[4739]: I0121 15:30:21.226112 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.226460 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.227332 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.227718 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.229151 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.229393 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.245112 4739 scope.go:117] "RemoveContainer" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.270883 4739 scope.go:117] "RemoveContainer" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.286212 4739 scope.go:117] "RemoveContainer" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.305370 4739 scope.go:117] "RemoveContainer" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.337739 4739 scope.go:117] "RemoveContainer" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.338688 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": container with ID starting with 
8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec not found: ID does not exist" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.338733 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec"} err="failed to get container status \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": rpc error: code = NotFound desc = could not find container \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": container with ID starting with 8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.338768 4739 scope.go:117] "RemoveContainer" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.339352 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": container with ID starting with fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec not found: ID does not exist" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.339558 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec"} err="failed to get container status \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": rpc error: code = NotFound desc = could not find container \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": container with ID starting with fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.339602 4739 scope.go:117] "RemoveContainer" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.339994 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": container with ID starting with 5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2 not found: ID does not exist" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.340105 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2"} err="failed to get container status \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": rpc error: code = NotFound desc = could not find container \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": container with ID starting with 5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2 not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.340194 4739 scope.go:117] "RemoveContainer" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.342840 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": container with ID starting with f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e not found: ID does not exist" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.342882 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e"} err="failed to get container status \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": rpc error: code = NotFound desc = could not find container \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": container with ID starting with f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.342911 4739 scope.go:117] "RemoveContainer" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.343320 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": container with ID starting with f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e not found: ID does not exist" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343414 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e"} err="failed to get container status \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": rpc error: code = NotFound desc = could not find container \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": container with ID starting with f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343488 4739 scope.go:117] "RemoveContainer" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.343925 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": container with ID starting with 1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785 not found: ID does not exist" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343988 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785"} err="failed to get container status \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": rpc error: code = NotFound desc = could not find container \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": container with ID starting with 1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785 not found: ID does not exist" Jan 21 15:30:22 crc kubenswrapper[4739]: I0121 15:30:22.790424 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 15:30:22 crc kubenswrapper[4739]: E0121 15:30:22.797512 4739 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" volumeName="registry-storage" Jan 21 15:30:25 crc kubenswrapper[4739]: E0121 15:30:25.264016 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w5v4k.188cc8b175b1517a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w5v4k,UID:1ed3c687-16d6-444b-8964-37ed32442908,APIVersion:v1,ResourceVersion:28001,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,LastTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234137 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234612 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234937 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235276 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235608 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: I0121 15:30:26.235636 4739 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235905 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.437605 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.838745 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.157872 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158188 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158499 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158811 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.159100 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.159120 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.338794 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.338966 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.389550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.390229 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.390768 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391237 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391563 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391917 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.640291 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.782138 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.783141 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.783714 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784080 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784353 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784571 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.808494 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.808898 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.809345 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.809902 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.252959 4739 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="038563e90b04604060fd62812f3236cf3d1affc38b19e653b6364b963b226881" exitCode=0 Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"038563e90b04604060fd62812f3236cf3d1affc38b19e653b6364b963b226881"} Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b050cab75fadc11ebc2a5330b5baa3bcdf531a0d495bacd6060622440cdb13a"} Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253700 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253894 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254333 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: E0121 15:30:28.254419 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254590 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254859 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.255092 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.255364 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.301525 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.302063 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305068 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305554 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305895 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.306167 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270467 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d63032c44fe8349b25cff58c8ef5ab9542eb4a68f36b7f4a71dc98b6b8a82ae"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cc9647c54c3437300b7db5fba6bccc9e8ab58132f36b85c96cb5a26edcbb9e6"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9715fa6de4a371007f283eedae5d0e9bb20dd212a6c6f50f48f133d67e0ba8f2"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c656bb215db7f42a7f2dededf3b38db2c8480d9b24d13447b591eae621b1293"} Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.277863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a9f985e0d6d217b9f4cc56bd8c710591c411aaaa0929833a1a19807db035b4e"} Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278116 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278129 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.810256 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.810592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.819505 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.286310 4739 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313847 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313894 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c" exitCode=1 Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c"} Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.314321 4739 scope.go:117] "RemoveContainer" containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322031 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322385 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"} Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322787 4739 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322809 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.327474 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.343477 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="35ebc9be-05a6-4aa5-bdab-76b1f81615a4" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.330585 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.331213 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.674287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.674513 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.675103 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:38 crc kubenswrapper[4739]: I0121 15:30:38.811888 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="35ebc9be-05a6-4aa5-bdab-76b1f81615a4" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.408486 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.432492 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.587422 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.603462 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.769902 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.921800 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.123693 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" containerID="cri-o://6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" gracePeriod=15 Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.124524 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.160680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.189741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.216559 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.308303 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.335690 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.412681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.489294 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.573309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.585066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.694796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.818502 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.818680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.833955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.893123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.948217 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.956371 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.323923 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.362188 4739 generic.go:334] "Generic (PLEG): container finished" podID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerID="6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" exitCode=0 Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.362239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerDied","Data":"6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2"} Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.454653 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.491593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.509141 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.549229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.598236 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.608111 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688868 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688972 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689021 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689058 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689241 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.690748 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691850 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691894 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p" (OuterVolumeSpecName: "kube-api-access-tdv4p") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "kube-api-access-tdv4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699964 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700438 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.701204 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.701359 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790898 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790951 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790964 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790975 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790984 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790995 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791004 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791013 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791023 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791031 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791043 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791052 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791061 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791069 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.911642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.930674 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.985895 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.052424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.071554 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.300634 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.368836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerDied","Data":"0797ec5703e54e95d565c3f72eae2eb927cff79ac4d8eb9ae951b8b30e7e3b11"} Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.368891 4739 scope.go:117] "RemoveContainer" containerID="6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.369234 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.383603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.523897 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.555614 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.565912 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.585963 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.590329 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.630639 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.708602 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.720582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.851769 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.892680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.908143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.912102 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.925867 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.963960 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.023019 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.051762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.054669 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.095380 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.194193 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.344093 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.370557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.478268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.601389 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.690399 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.782991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.817966 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.848409 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.981943 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.106750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.112398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.198769 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.221097 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.284148 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.292839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.399373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.441452 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.547774 
4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.602961 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.647208 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.675953 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.794207 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.889245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.944979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.007170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.016752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.048388 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.054092 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.085089 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.101138 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.156176 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.157408 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.244236 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.253603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.521626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.670413 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.673911 4739 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.673966 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.954531 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.132092 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.280758 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.620083 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.715705 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.023180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.101022 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.167245 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.199936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.364578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.417006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.804118 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.897518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.118395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.125278 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" 
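
The run of reflector.go:368 entries above is kubelet's client-go machinery at work: for every ConfigMap and Secret referenced by a pod scheduled on this node, kubelet runs a dedicated watch, and "Caches populated" is logged once the initial LIST for that object completes. A quick way to see which namespaces and resource kinds dominate such a flood is to tally the entries. The sketch below is illustrative only, not part of this artifact: the "kubelet.log" path is a placeholder, and it assumes the original one-entry-per-line journal format (this capture hard-wraps long lines, so entries split across wraps would be missed).

    // tally_reflectors.go: count "Caches populated" reflector entries
    // by resource kind and namespace.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches e.g.:
        //   Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default"
        re := regexp.MustCompile(`Caches populated for \*v1\.(\w+) from object-"([^"]+)"/"([^"]+)"`)

        f, err := os.Open("kubelet.log") // placeholder path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        counts := map[string]int{} // "namespace kind" -> count
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be very long
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[2]+" "+m[1]]++
            }
        }
        for k, n := range counts {
            fmt.Printf("%5d %s\n", n, k)
        }
    }
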
Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.499249 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.753926 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.654520 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.740796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.803903 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.913861 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.161972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.398899 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.409128 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.564941 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.585394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.824200 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.885788 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.977019 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.047266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.076118 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.209456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.471413 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.546250 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 
15:30:53.553419 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.667374 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.689612 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.738650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.834479 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.894323 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.969323 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.982216 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.063691 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.067870 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.456051 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.468418 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.561298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.605117 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.742238 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.796780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.866163 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.866524 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.866503707 podStartE2EDuration="37.866503707s" podCreationTimestamp="2026-01-21 15:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:35.046249364 +0000 
UTC m=+266.736955638" watchObservedRunningTime="2026-01-21 15:30:54.866503707 +0000 UTC m=+286.557209981" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.874745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6phz" podStartSLOduration=38.682853847 podStartE2EDuration="2m8.874725554s" podCreationTimestamp="2026-01-21 15:28:46 +0000 UTC" firstStartedPulling="2026-01-21 15:28:49.857187368 +0000 UTC m=+161.547893622" lastFinishedPulling="2026-01-21 15:30:20.049059065 +0000 UTC m=+251.739765329" observedRunningTime="2026-01-21 15:30:35.012196676 +0000 UTC m=+266.702902940" watchObservedRunningTime="2026-01-21 15:30:54.874725554 +0000 UTC m=+286.565431818" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.876854 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-w5v4k","openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.876927 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-fqqqm","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877251 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877270 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877289 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877298 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877319 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-utilities" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877333 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-utilities" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877361 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-content" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877370 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-content" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877398 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877408 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877622 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877654 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877672 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.878554 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.887109 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.887454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888065 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888338 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.889927 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.890012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.890178 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.893980 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.903320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.903546 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.904970 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.906007 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.908200 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:54 crc kubenswrapper[4739]: 
I0121 15:30:54.911247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.914794 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.917389 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.917371298 podStartE2EDuration="19.917371298s" podCreationTimestamp="2026-01-21 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:54.905254495 +0000 UTC m=+286.595960759" watchObservedRunningTime="2026-01-21 15:30:54.917371298 +0000 UTC m=+286.608077562" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928785 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929144 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929331 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.931188 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.932808 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031546 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031585 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031607 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.032492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.033979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.034231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.035610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.036116 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.038621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039528 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.045094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.048258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.049424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.051649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.053316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.221592 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.269997 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.478935 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.487349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.586912 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.617180 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.697291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.711424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.716375 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.750977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.833255 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-fqqqm"] Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.018548 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.034001 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.081298 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.136893 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.257954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.272082 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.343527 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.406734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.427620 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" event={"ID":"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a","Type":"ContainerStarted","Data":"df39f7608643e92f76e9b87b6981edcaf85a6001c1a41cc5bb1a72b5e139709b"} Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.766051 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.789252 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed3c687-16d6-444b-8964-37ed32442908" path="/var/lib/kubelet/pods/1ed3c687-16d6-444b-8964-37ed32442908/volumes" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.790115 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" path="/var/lib/kubelet/pods/a82d6ee2-dfeb-42c9-9102-15b80cc3c055/volumes" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.884205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.885505 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.888582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.936261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.936362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.992446 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.045157 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.056767 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.205690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.443987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" event={"ID":"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a","Type":"ContainerStarted","Data":"86a29ccab9cfaf9a1ef1191db410babdf59e216261d9ddeea516cfd0bf82b97b"} Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.444331 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.446158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.449380 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:57 
crc kubenswrapper[4739]: I0121 15:30:57.483532 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podStartSLOduration=40.48351669 podStartE2EDuration="40.48351669s" podCreationTimestamp="2026-01-21 15:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:57.464497055 +0000 UTC m=+289.155203329" watchObservedRunningTime="2026-01-21 15:30:57.48351669 +0000 UTC m=+289.174222954" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.552052 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.572197 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674761 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674812 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674887 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.675460 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.675601 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" gracePeriod=30 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.686745 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.777764 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.778027 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4" gracePeriod=5 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.901767 4739 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.994730 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.067750 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.079448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.107223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.308839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.414709 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.439495 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.454883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.555420 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.899384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.945946 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.971752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.975599 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.025889 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.040714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.211764 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.260661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.329928 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.430574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.714547 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.824556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.981900 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.067305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.088999 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.102028 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.113770 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.265084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.369982 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.420660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.532859 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.699263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.738787 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.978882 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.017543 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.088322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.127218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.289915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.304481 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.414515 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.567715 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.568529 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.620480 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.631852 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.639005 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.745015 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.763367 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.930584 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.002349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.060127 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.316660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.784152 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.138079 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.474898 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.474931 4739 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4" exitCode=137
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.518354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.518624 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.525125 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.589976 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.590396 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610551 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611211 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611424 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611462 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611480 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.618569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712122 4739 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712158 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712166 4739 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712175 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712184 4739 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.854433 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.408783 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.467606 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487355 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487471 4739 scope.go:117] "RemoveContainer" containerID="3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487664 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.793182 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.793494 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.809889 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.809948 4739 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="05b23e0e-96a6-4415-9cd5-309ad7d9673d"
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.818135 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.818189 4739 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="05b23e0e-96a6-4415-9cd5-309ad7d9673d"
Jan 21 15:31:05 crc kubenswrapper[4739]: I0121 15:31:05.122515 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 15:31:08 crc kubenswrapper[4739]: I0121 15:31:08.632530 4739 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.633357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635096 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635134 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" exitCode=137
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635163 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"}
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4"}
Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635203 4739 scope.go:117] "RemoveContainer" containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c"
Jan 21 15:31:29 crc kubenswrapper[4739]: I0121 15:31:29.641530 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 21 15:31:31 crc kubenswrapper[4739]: I0121 15:31:31.770332 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.349493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.548984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.549378 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server" containerID="cri-o://d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d" gracePeriod=2
Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.660726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" containerID="cri-o://f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" gracePeriod=2
Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.906264 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.012436 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities" (OuterVolumeSpecName: "utilities") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.023899 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh" (OuterVolumeSpecName: "kube-api-access-5gkvh") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "kube-api-access-5gkvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.047067 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.055574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113271 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113305 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113317 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214202 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") "
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.215623 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities" (OuterVolumeSpecName: "utilities") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.217952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4" (OuterVolumeSpecName: "kube-api-access-s2pd4") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "kube-api-access-s2pd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.267880 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316232 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316292 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316307 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.669906 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d" exitCode=0
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.669989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670035 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670712 4739 scope.go:117] "RemoveContainer" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"35c59b7a17a024e316d93c0ddc28b0f3ad5d3ed108d5a24d6ca60b8f080c2d86"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673579 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" exitCode=0
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673612 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673636 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673691 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.694071 4739 scope.go:117] "RemoveContainer" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.708037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.712339 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.724577 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.728838 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.729115 4739 scope.go:117] "RemoveContainer" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753082 4739 scope.go:117] "RemoveContainer" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.753475 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": container with ID starting with d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d not found: ID does not exist" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753516 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"} err="failed to get container status \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": rpc error: code = NotFound desc = could not find container \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": container with ID starting with d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753548 4739 scope.go:117] "RemoveContainer" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.753798 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": container with ID starting with e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d not found: ID does not exist" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753844 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} err="failed to get container status \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": rpc error: code = NotFound desc = could not find container \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": container with ID starting with e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753862 4739 scope.go:117] "RemoveContainer" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.754113 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": container with ID starting with acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c not found: ID does not exist" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.754154 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"} err="failed to get container status \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": rpc error: code = NotFound desc = could not find container \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": container with ID starting with acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.754176 4739 scope.go:117] "RemoveContainer" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.787023 4739 scope.go:117] "RemoveContainer" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.826191 4739 scope.go:117] "RemoveContainer" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.852949 4739 scope.go:117] "RemoveContainer" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.853622 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": container with ID starting with f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813 not found: ID does not exist" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.853715 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"} err="failed to get container status \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": rpc error: code = NotFound desc = could not find container \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": container with ID starting with f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813 not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.853759 4739 scope.go:117] "RemoveContainer" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.855633 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": container with ID starting with 30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5 not found: ID does not exist" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.855673 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} err="failed to get container status \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": rpc error: code = NotFound desc = could not find container \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": container with ID starting with 30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5 not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.855702 4739 scope.go:117] "RemoveContainer" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.856175 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": container with ID starting with 7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396 not found: ID does not exist" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.856252 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"} err="failed to get container status \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": rpc error: code = NotFound desc = could not find container \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": container with ID starting with 7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396 not found: ID does not exist"
Jan 21 15:31:34 crc kubenswrapper[4739]: I0121 15:31:34.792117 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" path="/var/lib/kubelet/pods/3f24f8c8-f70f-47a4-998b-72b7ba0875cb/volumes"
Jan 21 15:31:34 crc kubenswrapper[4739]: I0121 15:31:34.793501 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" path="/var/lib/kubelet/pods/fdd79051-71bc-4353-a426-f4a86fe4de42/volumes"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.150029 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.150741 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" containerID="cri-o://e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" gracePeriod=2
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.552351 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") "
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645274 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") "
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") "
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.649615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities" (OuterVolumeSpecName: "utilities") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.656439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4" (OuterVolumeSpecName: "kube-api-access-m6wj4") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "kube-api-access-m6wj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691331 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" exitCode=0
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"}
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"8ba79c9d61bcfeac0a269e7655d837a83fd2729f207c3cf49a1f21c91afb909b"}
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691655 4739 scope.go:117] "RemoveContainer" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691770 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.710498 4739 scope.go:117] "RemoveContainer" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.730955 4739 scope.go:117] "RemoveContainer" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.744915 4739 scope.go:117] "RemoveContainer" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"
Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.746627 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": container with ID starting with e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b not found: ID does not exist" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.746668 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"} err="failed to get container status \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": rpc error: code = NotFound desc = could not find container \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": container with ID starting with e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b not found: ID does not exist"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.746694 4739 scope.go:117] "RemoveContainer" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"
Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.747117 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": container with ID starting with d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43 not found: ID does not exist" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747139 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"} err="failed to get container status \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": rpc error: code = NotFound desc = could not find container \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": container with ID starting with d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43 not found: ID does not exist"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747153 4739 scope.go:117] "RemoveContainer" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"
Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.747447 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": container with ID starting with eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433 not found: ID does not exist" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747472 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"} err="failed to get container status \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": rpc error: code = NotFound desc = could not find container \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": container with ID starting with eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433 not found: ID does not exist"
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.748840 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.748872 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.788701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.850218 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.019697 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.024141 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.795567 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" path="/var/lib/kubelet/pods/47ff9f0e-8d35-4492-a0f4-6b7b580afa21/volumes"
Jan 21 15:31:37 crc kubenswrapper[4739]: I0121 15:31:37.674473 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:31:37 crc kubenswrapper[4739]: I0121 15:31:37.679732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:31:41 crc kubenswrapper[4739]: I0121 15:31:41.773940 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.510984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"]
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.511857 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" containerID="cri-o://03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" gracePeriod=30
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.515999 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"]
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.516285 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" containerID="cri-o://354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" gracePeriod=30
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.782647 4739 generic.go:334] "Generic (PLEG): container finished" podID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerID="03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" exitCode=0
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.786991 4739 generic.go:334] "Generic (PLEG): container finished" podID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerID="354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" exitCode=0
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.788980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerDied","Data":"03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73"}
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.789029 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerDied","Data":"354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681"}
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.968273 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7"
Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.973067 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142112 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142155 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca" (OuterVolumeSpecName: "client-ca") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143691 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") "
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144022 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144468 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config" (OuterVolumeSpecName: "config") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144642 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config" (OuterVolumeSpecName: "config") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.149990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.151595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh" (OuterVolumeSpecName: "kube-api-access-zt2bh") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "kube-api-access-zt2bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.152374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.158380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b" (OuterVolumeSpecName: "kube-api-access-mwc5b") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "kube-api-access-mwc5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.158541 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245333 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245389 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245402 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245413 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245422 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245430 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245438 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245446 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781122 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"]
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781502 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781524 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781543 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781551 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781562 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781567 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781578 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781584 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781593 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781599 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781610 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-content"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781616 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-content"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781625 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781641 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781647 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781656 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781662 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781672 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-content"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781680 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-content"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781689 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781695 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-utilities"
Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-content"
Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781711 4739 state_mem.go:107] "Deleted CPUSet assignment"
podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781835 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781849 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781860 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781870 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781880 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781888 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.782560 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.787418 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.788798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerDied","Data":"034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3"} Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795248 4739 scope.go:117] "RemoveContainer" containerID="354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795396 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.800923 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.801613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerDied","Data":"e7f90a4a156c4791d43e50f63871bf0db885480b9b2d6f3074942567e4b12032"} Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.801736 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.821428 4739 scope.go:117] "RemoveContainer" containerID="03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.844651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.882274 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.890910 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.899546 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.905346 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.953576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.953929 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954308 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc 
kubenswrapper[4739]: I0121 15:31:51.954441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057047 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058784 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" 
(UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.063655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.069118 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.077019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.077570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.100583 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.111736 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.363058 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"] Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.656491 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:52 crc kubenswrapper[4739]: W0121 15:31:52.665428 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49f0121_51e3_4cb3_b9f4_ae6087f38d00.slice/crio-297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af WatchSource:0}: Error finding container 297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af: Status 404 returned error can't find the container with id 297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.795606 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" path="/var/lib/kubelet/pods/8a227bd1-9590-4abe-9b62-3e3dc7b537c1/volumes" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.796312 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" path="/var/lib/kubelet/pods/dbf3570d-9cd6-4e26-bb55-023b935f9615/volumes" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerStarted","Data":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"} Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerStarted","Data":"297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af"} Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811986 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.813506 4739 patch_prober.go:28] interesting pod/controller-manager-855ffb57fb-sz6sh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.813574 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.817367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" event={"ID":"01cc83e2-7bed-4429-8a77-390e56bbf855","Type":"ContainerStarted","Data":"f27d8d66a6c018610b6281cedc240fe49b85cbe60fed4d962b7c7dd24eac1587"} Jan 21 15:31:52 crc 
Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.818457 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"
Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.852962 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podStartSLOduration=2.852936986 podStartE2EDuration="2.852936986s" podCreationTimestamp="2026-01-21 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:31:52.834162216 +0000 UTC m=+344.524868480" watchObservedRunningTime="2026-01-21 15:31:52.852936986 +0000 UTC m=+344.543643240"
Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.931958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"
Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.951990 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podStartSLOduration=2.9519708099999997 podStartE2EDuration="2.95197081s" podCreationTimestamp="2026-01-21 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:31:52.852169265 +0000 UTC m=+344.542875549" watchObservedRunningTime="2026-01-21 15:31:52.95197081 +0000 UTC m=+344.642677074"
Jan 21 15:31:53 crc kubenswrapper[4739]: I0121 15:31:53.830359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.075055 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"]
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.075912 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" containerID="cri-o://3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" gracePeriod=30
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.222505 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.222569 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.753461 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") "
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") "
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870923 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") "
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") "
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871032 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") "
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca" (OuterVolumeSpecName: "client-ca") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.872127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config" (OuterVolumeSpecName: "config") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.876547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq" (OuterVolumeSpecName: "kube-api-access-xrwzq") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "kube-api-access-xrwzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.879944 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.896984 4739 generic.go:334] "Generic (PLEG): container finished" podID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" exitCode=0
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897023 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerDied","Data":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"}
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897096 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerDied","Data":"297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af"}
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897116 4739 scope.go:117] "RemoveContainer" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.915185 4739 scope.go:117] "RemoveContainer" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"
Jan 21 15:32:05 crc kubenswrapper[4739]: E0121 15:32:05.915617 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": container with ID starting with 3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d not found: ID does not exist" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.915703 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"} err="failed to get container status \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": rpc error: code = NotFound desc = could not find container \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": container with ID starting with 3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d not found: ID does not exist"
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.927876 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"]
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.929897 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"]
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971940 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971980 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971998 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.972012 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.972021 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.796595 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" path="/var/lib/kubelet/pods/d49f0121-51e3-4cb3-b9f4-ae6087f38d00/volumes"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798219 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"]
Jan 21 15:32:06 crc kubenswrapper[4739]: E0121 15:32:06.798451 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798532 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798688 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.799154 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803446 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.805729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"]
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803754 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803831 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803931 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.806395 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.809268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.884162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.987206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.988451 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.989371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn"
pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.995313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.004659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.134501 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.348804 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"] Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" event={"ID":"efe44aa5-049f-4323-8df8-d08d3456d2fd","Type":"ContainerStarted","Data":"668d9cd4f983999e5401608e3c2b2667cad632c7c93d945786308dfaac82fe76"} Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909639 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" event={"ID":"efe44aa5-049f-4323-8df8-d08d3456d2fd","Type":"ContainerStarted","Data":"6df9863c5502281b2089048380405f6f2a0050127d2b0d40bd99efbfc4bfff6d"} Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.913575 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.929126 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" podStartSLOduration=2.929111781 podStartE2EDuration="2.929111781s" podCreationTimestamp="2026-01-21 15:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:32:07.927151048 +0000 UTC m=+359.617857312" watchObservedRunningTime="2026-01-21 15:32:07.929111781 +0000 UTC m=+359.619818035" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.335152 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.337199 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" containerID="cri-o://08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" gracePeriod=30 Jan 21 15:32:22 
crc kubenswrapper[4739]: I0121 15:32:22.351305 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.351556 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" containerID="cri-o://1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.370850 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.371091 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" containerID="cri-o://48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.380771 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.381056 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" containerID="cri-o://a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.394085 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.394521 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" containerID="cri-o://afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.408085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.409348 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.421045 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.587412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.598220 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.610259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.713806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.959363 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.967555 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.037986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.038023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.038094 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.039029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities" (OuterVolumeSpecName: "utilities") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.049503 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"80f37abb660ca7973267f6b03eb2b00ab62858a4ef5d1dbd02c60af6327d0edf"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050351 4739 scope.go:117] "RemoveContainer" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050508 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.052241 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt" (OuterVolumeSpecName: "kube-api-access-fr9tt") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "kube-api-access-fr9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.052411 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47" (OuterVolumeSpecName: "kube-api-access-r2v47") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "kube-api-access-r2v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.058213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities" (OuterVolumeSpecName: "utilities") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081204 4739 scope.go:117] "RemoveContainer" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081454 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"cc670b96dead1450a562f21a646f9e5f756fd0a05781547fb1510f02ab348006"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081689 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.117456 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerID="48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.117588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerDied","Data":"48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.122906 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.122987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.128954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.135987 4739 scope.go:117] "RemoveContainer" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142710 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142811 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142897 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142961 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.143048 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.143974 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.144017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.158190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170365 4739 scope.go:117] "RemoveContainer" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.170792 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": container with ID starting with 1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b not found: ID does not exist" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170841 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} err="failed to get container status \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": rpc error: code = NotFound desc = could not find container \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": container with ID starting with 1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170861 4739 scope.go:117] "RemoveContainer" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.171160 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": container with ID starting with 351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319 not found: ID does not exist" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171179 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} err="failed to get container status \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": rpc error: code = NotFound desc = could not find container \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": container with ID starting with 351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171190 4739 scope.go:117] "RemoveContainer" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.171486 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": container with ID starting with d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422 not found: ID does not exist" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171578 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"} err="failed to get container status \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": rpc error: code = NotFound desc = could not 
find container \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": container with ID starting with d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171659 4739 scope.go:117] "RemoveContainer" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.172577 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.178948 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.189063 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.191633 4739 scope.go:117] "RemoveContainer" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.214256 4739 scope.go:117] "RemoveContainer" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.231582 4739 scope.go:117] "RemoveContainer" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.232349 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": container with ID starting with 08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f not found: ID does not exist" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.232399 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} err="failed to get container status \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": rpc error: code = NotFound desc = could not find container \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": container with ID starting with 08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.232436 4739 scope.go:117] "RemoveContainer" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.233058 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": container with ID starting with 3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4 not found: ID does not exist" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233091 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} err="failed to get container status 
\"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": rpc error: code = NotFound desc = could not find container \"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": container with ID starting with 3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233111 4739 scope.go:117] "RemoveContainer" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.233355 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": container with ID starting with d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961 not found: ID does not exist" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233398 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"} err="failed to get container status \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": rpc error: code = NotFound desc = could not find container \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": container with ID starting with d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.243921 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.243965 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: 
\"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.245209 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.247763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc" (OuterVolumeSpecName: "kube-api-access-b5fwc") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "kube-api-access-b5fwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.249589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities" (OuterVolumeSpecName: "utilities") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.251177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities" (OuterVolumeSpecName: "utilities") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.263344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw" (OuterVolumeSpecName: "kube-api-access-n2lnw") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "kube-api-access-n2lnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.269690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.330606 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:23 crc kubenswrapper[4739]: W0121 15:32:23.337599 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf61fadad_2760_4a0f_8f1c_58598416d39a.slice/crio-7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb WatchSource:0}: Error finding container 7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb: Status 404 returned error can't find the container with id 7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347314 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347512 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347525 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347534 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347545 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347553 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.348267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod 
"b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.351997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.352555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr" (OuterVolumeSpecName: "kube-api-access-zs5tr") pod "b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "kube-api-access-zs5tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.381463 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.383871 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.386546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.427077 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.430790 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449020 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449052 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449061 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449069 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163464 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.166707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerDied","Data":"a312274d61cdfef373903e83e3a79f8e6217d316bd6726cff1386794baa06eb2"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168566 4739 scope.go:117] "RemoveContainer" containerID="48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168577 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.173254 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"353a2791208f5853a1241541e270354e4fc453c8d0c53deec17482b7d7512a0d"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.173338 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.188482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"0ff96cbaaff2209979db14735415e92278e9af5295f5d7422450da587e74592e"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.188593 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.197738 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" podStartSLOduration=2.197717192 podStartE2EDuration="2.197717192s" podCreationTimestamp="2026-01-21 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:32:24.183089316 +0000 UTC m=+375.873795630" watchObservedRunningTime="2026-01-21 15:32:24.197717192 +0000 UTC m=+375.888423456" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.201973 4739 scope.go:117] "RemoveContainer" containerID="a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.241809 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.246671 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.248341 4739 scope.go:117] "RemoveContainer" containerID="f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.257247 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.276352 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.277788 4739 scope.go:117] "RemoveContainer" containerID="a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.289851 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.337705 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.349015 4739 scope.go:117] "RemoveContainer" containerID="afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.396313 4739 
scope.go:117] "RemoveContainer" containerID="238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.416805 4739 scope.go:117] "RemoveContainer" containerID="335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545073 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545294 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545305 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545312 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545318 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545325 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545332 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545344 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545350 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545377 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545382 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545389 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545395 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545408 4739 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545415 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545421 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545427 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545433 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545441 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545447 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545456 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545472 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545477 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545552 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545561 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545569 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545579 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545589 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.546230 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.548215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.564578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663324 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663472 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.745423 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.746318 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.747731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.758884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.766327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.789219 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" path="/var/lib/kubelet/pods/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.789793 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" path="/var/lib/kubelet/pods/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.790412 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" path="/var/lib/kubelet/pods/b8e31058-907a-4b13-938f-8e2ec989ca0b/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.791296 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5239161-d375-4078-8cbf-95219376f756" path="/var/lib/kubelet/pods/d5239161-d375-4078-8cbf-95219376f756/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.792093 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db025233-2eca-4500-9e3c-67610f3f7a37" path="/var/lib/kubelet/pods/db025233-2eca-4500-9e3c-67610f3f7a37/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.796695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.866894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867838 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.967926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.968468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.968514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.969328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.970934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.998850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.071205 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.306991 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:25 crc kubenswrapper[4739]: W0121 15:32:25.311314 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b842e6_f082_4d40_8e57_620003b6cc52.slice/crio-9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0 WatchSource:0}: Error finding container 9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0: Status 404 returned error can't find the container with id 9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0 Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.542909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:25 crc kubenswrapper[4739]: W0121 15:32:25.546218 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod730d76de_628a_49ea_ad88_87a719e76750.slice/crio-2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825 WatchSource:0}: Error finding container 2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825: Status 404 returned error can't find the container with id 2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224102 4739 generic.go:334] "Generic (PLEG): container finished" podID="67b842e6-f082-4d40-8e57-620003b6cc52" containerID="ee918080675ef2481a5221f7938905b806ca9452289b67f453d77a1e52d5a740" exitCode=0 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerDied","Data":"ee918080675ef2481a5221f7938905b806ca9452289b67f453d77a1e52d5a740"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerStarted","Data":"9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.235486 4739 generic.go:334] "Generic (PLEG): container finished" podID="730d76de-628a-49ea-ad88-87a719e76750" containerID="f021e9873ed7b1e5c81d6ecb1e9a96266c7134218c879be0ccbffc34c5295835" exitCode=0 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.236045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerDied","Data":"f021e9873ed7b1e5c81d6ecb1e9a96266c7134218c879be0ccbffc34c5295835"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.236072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerStarted","Data":"2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.951416 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.953077 4739 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.957316 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.959470 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095694 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.096309 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.096304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod 
\"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.119944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.148974 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.150157 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.152810 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.163496 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.197625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.197704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.198128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.283089 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300356 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300440 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300970 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.301227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.322167 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.473220 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.689394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:27 crc kubenswrapper[4739]: W0121 15:32:27.693553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b35465_41de_46cd_acdb_53b8c6bace46.slice/crio-97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841 WatchSource:0}: Error finding container 97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841: Status 404 returned error can't find the container with id 97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841 Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.863628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: W0121 15:32:27.897519 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b1b410_e1bc_4ea1_88c0_d4ee6390214b.slice/crio-15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36 WatchSource:0}: Error finding container 15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36: Status 404 returned error can't find the container with id 15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247243 4739 generic.go:334] "Generic (PLEG): container finished" podID="37b1b410-e1bc-4ea1-88c0-d4ee6390214b" containerID="9e9b805d845b197b78638517b13e63779fe040c8811cfb4bd7f67bf796bc333d" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerDied","Data":"9e9b805d845b197b78638517b13e63779fe040c8811cfb4bd7f67bf796bc333d"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.250607 4739 generic.go:334] "Generic (PLEG): container finished" podID="730d76de-628a-49ea-ad88-87a719e76750" containerID="da97d700f289333e1ed69f381db9b915437c0728a63c957b0583605935e668e2" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.250674 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerDied","Data":"da97d700f289333e1ed69f381db9b915437c0728a63c957b0583605935e668e2"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254420 4739 generic.go:334] "Generic (PLEG): container finished" podID="87b35465-41de-46cd-acdb-53b8c6bace46" containerID="6eb509a26b842031c9262a07734c5d50a8ff43ce2b8e2d8e48187041fda2e3f2" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254492 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerDied","Data":"6eb509a26b842031c9262a07734c5d50a8ff43ce2b8e2d8e48187041fda2e3f2"} Jan 21 
15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerStarted","Data":"97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.257700 4739 generic.go:334] "Generic (PLEG): container finished" podID="67b842e6-f082-4d40-8e57-620003b6cc52" containerID="c10be53848ac67021a1e15a65e8676194fe7ea107cded637dea37706c3157cc4" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.257759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerDied","Data":"c10be53848ac67021a1e15a65e8676194fe7ea107cded637dea37706c3157cc4"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.264349 4739 generic.go:334] "Generic (PLEG): container finished" podID="87b35465-41de-46cd-acdb-53b8c6bace46" containerID="4ba9b049fedfa7fdc1b6ebe78838dedc17fe3b5aae2b37c85fb965fa0f027145" exitCode=0 Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.264543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerDied","Data":"4ba9b049fedfa7fdc1b6ebe78838dedc17fe3b5aae2b37c85fb965fa0f027145"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.270440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerStarted","Data":"75a1b5f19a726ed639c320601b3ca890e36050abba45964f22e413540ec45b12"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.273518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.276631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerStarted","Data":"15323aea15ed7ac9f4012b06e602316c8f85f0a62e0d9c875ce9a4857d9df7cd"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.318657 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2phqw" podStartSLOduration=2.855173554 podStartE2EDuration="5.318633277s" podCreationTimestamp="2026-01-21 15:32:24 +0000 UTC" firstStartedPulling="2026-01-21 15:32:26.240514151 +0000 UTC m=+377.931220415" lastFinishedPulling="2026-01-21 15:32:28.703973874 +0000 UTC m=+380.394680138" observedRunningTime="2026-01-21 15:32:29.314412363 +0000 UTC m=+381.005118627" watchObservedRunningTime="2026-01-21 15:32:29.318633277 +0000 UTC m=+381.009339541" Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.358869 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5s9m" podStartSLOduration=2.934131876 podStartE2EDuration="5.358850398s" podCreationTimestamp="2026-01-21 15:32:24 +0000 UTC" firstStartedPulling="2026-01-21 15:32:26.225606027 +0000 UTC m=+377.916312311" lastFinishedPulling="2026-01-21 15:32:28.650324579 +0000 UTC m=+380.341030833" 
observedRunningTime="2026-01-21 15:32:29.341658931 +0000 UTC m=+381.032365195" watchObservedRunningTime="2026-01-21 15:32:29.358850398 +0000 UTC m=+381.049556652" Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.286521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerStarted","Data":"a79a84f0f1301b99bb0c8b3a7e6a2556a3fc5a42b249a7e2cfed43be352a4cb4"} Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.290104 4739 generic.go:334] "Generic (PLEG): container finished" podID="37b1b410-e1bc-4ea1-88c0-d4ee6390214b" containerID="902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6" exitCode=0 Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.290152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerDied","Data":"902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6"} Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.306299 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpz9t" podStartSLOduration=2.6927271790000002 podStartE2EDuration="4.306281431s" podCreationTimestamp="2026-01-21 15:32:26 +0000 UTC" firstStartedPulling="2026-01-21 15:32:28.257052429 +0000 UTC m=+379.947758693" lastFinishedPulling="2026-01-21 15:32:29.870606681 +0000 UTC m=+381.561312945" observedRunningTime="2026-01-21 15:32:30.303250209 +0000 UTC m=+381.993956473" watchObservedRunningTime="2026-01-21 15:32:30.306281431 +0000 UTC m=+381.996987695" Jan 21 15:32:31 crc kubenswrapper[4739]: I0121 15:32:31.298012 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"a6bef631fd727d5fdb62f02eaecfb78ef2faaeff6e69bf3924931caa57c11d89"} Jan 21 15:32:31 crc kubenswrapper[4739]: I0121 15:32:31.315539 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mf97s" podStartSLOduration=1.574069852 podStartE2EDuration="4.315521141s" podCreationTimestamp="2026-01-21 15:32:27 +0000 UTC" firstStartedPulling="2026-01-21 15:32:28.248485455 +0000 UTC m=+379.939191719" lastFinishedPulling="2026-01-21 15:32:30.989936744 +0000 UTC m=+382.680643008" observedRunningTime="2026-01-21 15:32:31.313332271 +0000 UTC m=+383.004038545" watchObservedRunningTime="2026-01-21 15:32:31.315521141 +0000 UTC m=+383.006227405" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.868928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.869551 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.918469 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.071542 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.071846 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
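
The pod_startup_latency_tracker entries above are self-checking: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration comes out to that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below redoes the arithmetic for community-operators-2phqw from the logged fields; the interpretation of the SLO figure is read off these numbers, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-21 15:32:24 +0000 UTC")
	firstPull := parse("2026-01-21 15:32:26.240514151 +0000 UTC")
	lastPull := parse("2026-01-21 15:32:28.703973874 +0000 UTC")
	observed := parse("2026-01-21 15:32:29.318633277 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e.Seconds()) // 5.318633277, matches podStartE2EDuration
	fmt.Println(slo.Seconds()) // 2.855173554, matches podStartSLOduration
}
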
pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.115133 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.223003 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.223582 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.355934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.366545 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.284278 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.285930 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.335190 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.376228 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.474547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.474610 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.516702 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:38 crc kubenswrapper[4739]: I0121 15:32:38.388920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223264 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223804 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223881 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.224336 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.224385 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" gracePeriod=600 Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.524660 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" exitCode=0 Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.525051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"} Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.525089 4739 scope.go:117] "RemoveContainer" containerID="59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" Jan 21 15:33:06 crc kubenswrapper[4739]: I0121 15:33:06.532588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} Jan 21 15:35:05 crc kubenswrapper[4739]: I0121 15:35:05.223217 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:35:05 crc kubenswrapper[4739]: I0121 15:35:05.223782 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:35:35 crc kubenswrapper[4739]: I0121 15:35:35.222737 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:35:35 crc kubenswrapper[4739]: I0121 15:35:35.223354 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" 
podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.222592 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223138 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223181 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223701 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223749 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" gracePeriod=600 Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.697730 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" exitCode=0 Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.697868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.698187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.698208 4739 scope.go:117] "RemoveContainer" containerID="0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.121769 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.123243 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.144055 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280597 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280620 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.303109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382523 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382544 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.383258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.385422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.385937 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.390147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.390161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.402977 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.403342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.438329 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.833118 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.024840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" event={"ID":"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7","Type":"ContainerStarted","Data":"ffb3cb7ef24af4abbf8b5dc983b25ee6c64ff94778140036ecbdf5b50ab37e63"} Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.025322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" event={"ID":"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7","Type":"ContainerStarted","Data":"2711eeab9dcfa9271a610a3e95c3a31d0e59ffc422f59573453a337cfaabeaa6"} Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.025374 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.046787 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podStartSLOduration=1.046760031 podStartE2EDuration="1.046760031s" podCreationTimestamp="2026-01-21 15:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:37:01.045052026 +0000 UTC m=+652.735758300" watchObservedRunningTime="2026-01-21 15:37:01.046760031 +0000 UTC m=+652.737466325" Jan 21 15:37:20 crc kubenswrapper[4739]: I0121 15:37:20.444097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:20 crc kubenswrapper[4739]: I0121 15:37:20.501219 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.938711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.939877 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.944805 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.944869 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.950836 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.951674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.952375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.954321 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2ngl6" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.969284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.980690 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.981389 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.987739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.999416 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.023079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127104 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.147082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.149974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.160140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.259926 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.271056 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.293574 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.588186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.595559 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.718122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.721663 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:44 crc kubenswrapper[4739]: W0121 15:37:44.725617 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a61f406_e13a_4295_a1cc_2d9a0b9197eb.slice/crio-58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee WatchSource:0}: Error finding container 58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee: Status 404 returned error can't find the container with id 58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee Jan 21 15:37:44 crc kubenswrapper[4739]: W0121 15:37:44.727699 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod796392e6_8151_400a_b817_4b844f2ec047.slice/crio-9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e WatchSource:0}: Error finding container 9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e: Status 404 returned error can't find the container with id 9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.518081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.519094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.520159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" event={"ID":"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a","Type":"ContainerStarted","Data":"e6e3f92aff0c69aadbc898b135e5c3e539dfb5996bfd0180aa893e4b6a7f30d1"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.550775 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry" containerID="cri-o://7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" gracePeriod=30 Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.438360 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498276 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498434 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498507 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.499535 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.500802 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.506516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.510022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.511216 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk" (OuterVolumeSpecName: "kube-api-access-kgwjk") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "kube-api-access-kgwjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.512296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.528040 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.528575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529462 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" exitCode=0 Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529500 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerDied","Data":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"} Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529547 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerDied","Data":"9cb5f44f60dc865e24fcf1602e334dc1e620dffa67ad590a7f5a509f38063137"} Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529564 4739 scope.go:117] "RemoveContainer" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529708 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.568294 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.571968 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599529 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599562 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599574 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599582 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599591 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599600 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599612 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.789846 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" path="/var/lib/kubelet/pods/0e76bbec-8e96-4589-bca2-78d151595ddf/volumes" Jan 21 15:37:48 crc kubenswrapper[4739]: I0121 15:37:48.417905 4739 scope.go:117] "RemoveContainer" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" Jan 21 15:37:48 crc kubenswrapper[4739]: E0121 15:37:48.418898 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": container with ID starting with 7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432 not found: ID does not exist" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" Jan 21 15:37:48 crc kubenswrapper[4739]: I0121 15:37:48.418933 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"} err="failed to get container status \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": rpc error: code = NotFound desc = could not find container \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": container with ID starting with 7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432 not found: ID does not exist" Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.582793 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7"} Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.587068 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8"} Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.614557 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qtp84" podStartSLOduration=2.188346949 podStartE2EDuration="10.614519479s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.731603353 +0000 UTC m=+696.422309617" lastFinishedPulling="2026-01-21 15:37:53.157775873 +0000 UTC m=+704.848482147" observedRunningTime="2026-01-21 15:37:53.598718205 +0000 UTC m=+705.289424499" watchObservedRunningTime="2026-01-21 15:37:53.614519479 +0000 UTC m=+705.305225743" Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.621565 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" podStartSLOduration=2.239349316 podStartE2EDuration="10.621547387s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.728208881 +0000 UTC m=+696.418915145" lastFinishedPulling="2026-01-21 15:37:53.110406952 +0000 UTC m=+704.801113216" observedRunningTime="2026-01-21 15:37:53.611907238 +0000 UTC m=+705.302613492" watchObservedRunningTime="2026-01-21 15:37:53.621547387 +0000 UTC m=+705.312253651" Jan 21 15:37:55 crc 
kubenswrapper[4739]: I0121 15:37:55.599389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" event={"ID":"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a","Type":"ContainerStarted","Data":"1b06181ceafa5cab60dd999d8d12abce6ef9fa621e3c6c682d151606c0610c16"} Jan 21 15:37:55 crc kubenswrapper[4739]: I0121 15:37:55.599698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:55 crc kubenswrapper[4739]: I0121 15:37:55.617130 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" podStartSLOduration=2.47914178 podStartE2EDuration="12.617112312s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.595317515 +0000 UTC m=+696.286023779" lastFinishedPulling="2026-01-21 15:37:54.733288037 +0000 UTC m=+706.423994311" observedRunningTime="2026-01-21 15:37:55.613369411 +0000 UTC m=+707.304075675" watchObservedRunningTime="2026-01-21 15:37:55.617112312 +0000 UTC m=+707.307818576" Jan 21 15:37:59 crc kubenswrapper[4739]: I0121 15:37:59.296988 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:38:05 crc kubenswrapper[4739]: I0121 15:38:05.222755 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:38:05 crc kubenswrapper[4739]: I0121 15:38:05.223202 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.348704 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"] Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349517 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb" containerID="cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349668 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349806 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd" containerID="cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349966 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" 
podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb" containerID="cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349999 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node" containerID="cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.350023 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging" containerID="cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.351695 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller" containerID="cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" gracePeriod=30 Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.387936 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" containerID="cri-o://37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" gracePeriod=30 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687093 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687844 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687877 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" exitCode=2 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687982 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.688493 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" Jan 21 15:38:09 crc kubenswrapper[4739]: E0121 15:38:09.688651 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.693614 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.699404 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.700593 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log" Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701106 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701130 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701140 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701168 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701176 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701182 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" exitCode=0 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701188 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" exitCode=143 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701194 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" exitCode=143 Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" 
event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301"} Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088"} Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.914917 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.918285 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.918734 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.919366 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.922236 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981100 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbjrz"] Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981294 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kubecfg-setup" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981306 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kubecfg-setup" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981317 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981330 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981336 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981346 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981352 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981360 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981376 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981382 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981391 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981410 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981418 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981424 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981431 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981436 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981447 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981453 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981459 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981464 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981471 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981477 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981574 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981582 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981589 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981597 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981605 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981611 4739 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981618 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981626 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging" Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981723 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981731 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981809 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981840 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.984487 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.043904 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.043983 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044019 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044099 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044114 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044183 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044256 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash" (OuterVolumeSpecName: "host-slash") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044555 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044580 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044702 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044722 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044736 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044781 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045054 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045091 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log" (OuterVolumeSpecName: "node-log") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045154 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045241 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket" (OuterVolumeSpecName: "log-socket") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045323 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045327 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045545 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045782 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045922 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" 
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046145 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046175 4739 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046189 4739 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046206 4739 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046220 4739 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046232 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046246 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046258 4739 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046269 4739 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046281 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046292 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046302 4739 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046313 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046325 4739 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046336 4739 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046347 4739 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046358 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.065208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.065543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7" (OuterVolumeSpecName: "kube-api-access-42sj7") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "kube-api-access-42sj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.072211 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147692 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147713 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147775 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147627 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147842 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147918 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147942 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147975 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName:
\"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.148641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.148764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.149109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: 
\"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151423 4739 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151446 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151471 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.152566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.170995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.301300 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: W0121 15:38:11.318272 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedee8f4f_60c3_431f_950c_452a9f284074.slice/crio-de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c WatchSource:0}: Error finding container de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c: Status 404 returned error can't find the container with id de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716069 4739 generic.go:334] "Generic (PLEG): container finished" podID="edee8f4f-60c3-431f-950c-452a9f284074" containerID="0dbd1c035f1f75f27c548b78f6e051a9c961cdab36e5fda9d96122bfa213e101" exitCode=0 Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerDied","Data":"0dbd1c035f1f75f27c548b78f6e051a9c961cdab36e5fda9d96122bfa213e101"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.719739 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.726789 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.735202 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"0aeeca19fcaed84c23a97affb5713825fb8fa16e6d2cae9b568c96f1ffdd5b82"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739342 4739 scope.go:117] "RemoveContainer" containerID="37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739534 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.800887 4739 scope.go:117] "RemoveContainer" containerID="22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.863132 4739 scope.go:117] "RemoveContainer" containerID="09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.864582 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"] Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.878121 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"] Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.904010 4739 scope.go:117] "RemoveContainer" containerID="408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.934539 4739 scope.go:117] "RemoveContainer" containerID="e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.954036 4739 scope.go:117] "RemoveContainer" containerID="3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.968395 4739 scope.go:117] "RemoveContainer" containerID="f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.983337 4739 scope.go:117] "RemoveContainer" containerID="91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.998066 4739 scope.go:117] "RemoveContainer" containerID="c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a" Jan 21 15:38:12 crc kubenswrapper[4739]: I0121 15:38:12.791326 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" path="/var/lib/kubelet/pods/6f87893e-5b9c-4dde-8992-3a66997edced/volumes" Jan 21 15:38:15 crc kubenswrapper[4739]: I0121 15:38:15.765065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"149602f7cf7f3dfc3bfd54548b3f7c13aae1edb0cbe97af0b9371a21715ef0bb"} Jan 21 15:38:17 crc kubenswrapper[4739]: I0121 15:38:17.779195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"428441d2569c4acae3f54883ee6ac5cfd8cfff711dbdc7171c38e9871468360e"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"3ec923f15ffa021d0ead128923abb691d4f30b3ab7b93d882534cc3fbbef96d5"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"f431bfbea0996b05396acfe7daa652c5dacb517680b52b287e35f76df8447065"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" 
event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"57869256fdc0ddb06ef4d50ef986d041863213eae71a5be837841fbeb9ea5559"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"e86260ba2d75bcfd0178d8acdd3c5f0fd73b985c3717f58c9d679c713c92a7c6"} Jan 21 15:38:20 crc kubenswrapper[4739]: I0121 15:38:20.805243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"57b349d1f827d273778c7da001d1ff96292b0e109b386671e6374b2f69f72fff"} Jan 21 15:38:23 crc kubenswrapper[4739]: I0121 15:38:23.782544 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" Jan 21 15:38:23 crc kubenswrapper[4739]: E0121 15:38:23.784097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.830508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"abd271a8df48d04f8fdba1d76a77f4d2b2d0c2673f9fc01a0e4809e71a5a8984"} Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831268 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831308 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.857534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.872857 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" podStartSLOduration=14.872841126 podStartE2EDuration="14.872841126s" podCreationTimestamp="2026-01-21 15:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:24.871755246 +0000 UTC m=+736.562461500" watchObservedRunningTime="2026-01-21 15:38:24.872841126 +0000 UTC m=+736.563547390" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.890522 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:35 crc kubenswrapper[4739]: I0121 15:38:35.222354 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
15:38:35 crc kubenswrapper[4739]: I0121 15:38:35.222945 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:38 crc kubenswrapper[4739]: I0121 15:38:38.786762 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" Jan 21 15:38:39 crc kubenswrapper[4739]: I0121 15:38:39.913089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 15:38:39 crc kubenswrapper[4739]: I0121 15:38:39.915731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"47c71fa0fa5fb1d8d519509f438c5ea30640e890a65e1cb32846e0c2005d7935"} Jan 21 15:38:41 crc kubenswrapper[4739]: I0121 15:38:41.325798 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.087648 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.089183 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.091546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.103672 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 
21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315391 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.316077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.316292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.348774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.468190 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.690348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.981703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerStarted","Data":"3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6"} Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.070154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.072908 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.082749 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: 
\"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.240009 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.240032 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.274710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.396902 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.856056 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.991388 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="95261349ecac2182f170c8984076055e70264cb72ea37e8f02d7e213f7f585b7" exitCode=0 Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.991455 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"95261349ecac2182f170c8984076055e70264cb72ea37e8f02d7e213f7f585b7"} Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.992384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerStarted","Data":"5b102253f388193a773c4e1a8f51eaf07efe95bb8b12715389809bfe49b85acd"} Jan 21 15:38:50 crc kubenswrapper[4739]: I0121 15:38:50.998215 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" exitCode=0 Jan 21 15:38:50 crc kubenswrapper[4739]: I0121 15:38:50.998476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b"} Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.010807 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="5fac7e1d8ffa774dd121292bf2acba1644b644035371a3108f5b1810a8b0083c" exitCode=0 Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.010857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"5fac7e1d8ffa774dd121292bf2acba1644b644035371a3108f5b1810a8b0083c"} Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.014248 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" exitCode=0 Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.014283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.022942 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerStarted","Data":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.025996 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="008ef047fb4ecb8959a0becff6f03761b88a5cc69ded8177462802517703b06d" exitCode=0 Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.026040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"008ef047fb4ecb8959a0becff6f03761b88a5cc69ded8177462802517703b06d"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.050373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2c8c" podStartSLOduration=2.6276090930000002 podStartE2EDuration="5.050350764s" podCreationTimestamp="2026-01-21 15:38:49 +0000 UTC" firstStartedPulling="2026-01-21 15:38:51.000182627 +0000 UTC m=+762.690888901" lastFinishedPulling="2026-01-21 15:38:53.422924318 +0000 UTC m=+765.113630572" observedRunningTime="2026-01-21 15:38:54.045606376 +0000 UTC m=+765.736312650" watchObservedRunningTime="2026-01-21 15:38:54.050350764 +0000 UTC m=+765.741057048" Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.840241 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.270244 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.313718 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle" (OuterVolumeSpecName: "bundle") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.319993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s" (OuterVolumeSpecName: "kube-api-access-h7s8s") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "kube-api-access-h7s8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.415359 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.415405 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.433615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util" (OuterVolumeSpecName: "util") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.515988 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6"} Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038855 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6" Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038863 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.640138 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641065 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641081 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641104 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="pull" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641110 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="pull" Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641118 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="util" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641125 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="util" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641733 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.643834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.644056 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.646938 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.661379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.756113 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.857097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.880875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.963062 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.172153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.397646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.397692 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.451984 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:00 crc kubenswrapper[4739]: I0121 15:39:00.075243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" event={"ID":"61c58953-6280-4a68-858f-056eed7e5c65","Type":"ContainerStarted","Data":"ae6ab4daa17b3f027f72993cdcb4d3c224281acd4b19720d4efe1c22084ba44f"} Jan 21 15:39:00 crc kubenswrapper[4739]: I0121 15:39:00.122409 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.061197 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.083346 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2c8c" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" containerID="cri-o://05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" gracePeriod=2 Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.418092 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.503649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities" (OuterVolumeSpecName: "utilities") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.510032 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4" (OuterVolumeSpecName: "kube-api-access-86mz4") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "kube-api-access-86mz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.604341 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.604369 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.818773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.908391 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092536 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" exitCode=0 Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"5b102253f388193a773c4e1a8f51eaf07efe95bb8b12715389809bfe49b85acd"} Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092633 4739 scope.go:117] "RemoveContainer" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092671 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.112883 4739 scope.go:117] "RemoveContainer" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.161418 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.164881 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.486793 4739 scope.go:117] "RemoveContainer" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.504660 4739 scope.go:117] "RemoveContainer" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.505223 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": container with ID starting with 05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53 not found: ID does not exist" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505259 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} err="failed to get container status \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": rpc error: code = NotFound desc = could not find container \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": container with ID starting with 05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53 not found: ID does not exist" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505282 4739 scope.go:117] "RemoveContainer" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.505660 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": container with ID starting with 1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326 not found: ID does not exist" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505722 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326"} err="failed to get container status \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": rpc error: code = NotFound desc = could not find container \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": container with ID starting with 1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326 not found: ID does not exist" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505763 4739 scope.go:117] "RemoveContainer" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.506122 4739 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": container with ID starting with de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b not found: ID does not exist" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.506152 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b"} err="failed to get container status \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": rpc error: code = NotFound desc = could not find container \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": container with ID starting with de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b not found: ID does not exist" Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.099184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" event={"ID":"61c58953-6280-4a68-858f-056eed7e5c65","Type":"ContainerStarted","Data":"3a1017fd2e33b43baa38d3464e05ab945c12c5197e57e1ade1de2965052fe759"} Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.116175 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" podStartSLOduration=1.792468183 podStartE2EDuration="6.116156124s" podCreationTimestamp="2026-01-21 15:38:58 +0000 UTC" firstStartedPulling="2026-01-21 15:38:59.183407429 +0000 UTC m=+770.874113683" lastFinishedPulling="2026-01-21 15:39:03.50709536 +0000 UTC m=+775.197801624" observedRunningTime="2026-01-21 15:39:04.11192229 +0000 UTC m=+775.802628564" watchObservedRunningTime="2026-01-21 15:39:04.116156124 +0000 UTC m=+775.806862388" Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.788878 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" path="/var/lib/kubelet/pods/599b3bd7-0366-4658-a1e6-c52b4fee4d7d/volumes" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222546 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222602 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222642 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.223266 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:39:05 crc 
kubenswrapper[4739]: I0121 15:39:05.223333 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" gracePeriod=600 Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115279 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" exitCode=0 Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115361 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115692 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115723 4739 scope.go:117] "RemoveContainer" containerID="03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563259 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563718 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563738 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-utilities" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-utilities" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563767 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-content" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563774 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-content" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563903 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.564444 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.570034 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.593486 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.594272 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.597187 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.600066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.655335 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.676605 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-srg8z"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.677207 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769302 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.769414 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.769467 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair podName:5812c445-156f-48d3-aa24-130b329cccfe nodeName:}" failed. No retries permitted until 2026-01-21 15:39:08.269446845 +0000 UTC m=+779.960153109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-fdf2j" (UID: "5812c445-156f-48d3-aa24-130b329cccfe") : secret "openshift-nmstate-webhook" not found Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.791767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.791809 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.850502 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.851074 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855279 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855428 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855604 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.859166 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871175 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.881159 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.895692 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.971714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.972282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.972325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.991087 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.062484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.063236 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077420 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.077762 4739 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.077826 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert podName:d1e5428b-c7db-4df9-8fad-fcfa89827ea4 nodeName:}" failed. No retries permitted until 2026-01-21 15:39:08.577802041 +0000 UTC m=+780.268508305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7nprl" (UID: "d1e5428b-c7db-4df9-8fad-fcfa89827ea4") : secret "plugin-serving-cert" not found Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.079495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.081788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.138794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.143842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-srg8z" event={"ID":"9460d049-7edd-4e18-a153-2b0bc3218a8a","Type":"ContainerStarted","Data":"1ddb53479c16623189720d8b483e0f72ce71f4b961f3d1f31c9b5d7ffd76f73e"} Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202789 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod 
\"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202808 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202849 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.229582 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.303994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 
15:39:08.304229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304256 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.304445 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.304520 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair podName:5812c445-156f-48d3-aa24-130b329cccfe nodeName:}" failed. No retries permitted until 2026-01-21 15:39:09.304502174 +0000 UTC m=+780.995208428 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-fdf2j" (UID: "5812c445-156f-48d3-aa24-130b329cccfe") : secret "openshift-nmstate-webhook" not found Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305613 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.308289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.308505 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.321086 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.451355 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.609023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.612583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.771998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.780302 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.899536 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: W0121 15:39:08.906742 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53004a12_f1d2_4468_ac01_f00094e24d56.slice/crio-c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908 WatchSource:0}: Error finding container c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908: Status 404 returned error can't find the container with id c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908 Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.012166 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:09 crc kubenswrapper[4739]: W0121 15:39:09.014305 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1e5428b_c7db_4df9_8fad_fcfa89827ea4.slice/crio-83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637 WatchSource:0}: Error finding container 83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637: Status 404 returned error can't find the container with id 83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637 Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.148805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"61832ab98fc19c83eb2d6a58b98c395cfbf07176aaf9b2a21be9414d6d9ba405"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.150335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" event={"ID":"d1e5428b-c7db-4df9-8fad-fcfa89827ea4","Type":"ContainerStarted","Data":"83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.151792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f9d58689-7z254" event={"ID":"53004a12-f1d2-4468-ac01-f00094e24d56","Type":"ContainerStarted","Data":"c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.326195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.332978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.451890 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.629069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:09 crc kubenswrapper[4739]: W0121 15:39:09.641030 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5812c445_156f_48d3_aa24_130b329cccfe.slice/crio-931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c WatchSource:0}: Error finding container 931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c: Status 404 returned error can't find the container with id 931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.159739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" event={"ID":"5812c445-156f-48d3-aa24-130b329cccfe","Type":"ContainerStarted","Data":"931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c"} Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.161944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f9d58689-7z254" event={"ID":"53004a12-f1d2-4468-ac01-f00094e24d56","Type":"ContainerStarted","Data":"0ad00ec468bc37df75e82f1e6220feaf823d3c2c7dfeb228bb4c7b1ea55a4d0e"} Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.188373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f9d58689-7z254" podStartSLOduration=2.1883488030000002 podStartE2EDuration="2.188348803s" podCreationTimestamp="2026-01-21 15:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:39:10.184395875 +0000 UTC m=+781.875102139" watchObservedRunningTime="2026-01-21 15:39:10.188348803 +0000 UTC m=+781.879055057" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.192512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" event={"ID":"d1e5428b-c7db-4df9-8fad-fcfa89827ea4","Type":"ContainerStarted","Data":"f13b2180a70212eb44b527e7dbe592fdae146946aed2338fca0a04801cd451a4"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.195791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-srg8z" event={"ID":"9460d049-7edd-4e18-a153-2b0bc3218a8a","Type":"ContainerStarted","Data":"f93f96e92a55bf6bda325f50a3201643534c2b0f5c15cbc537ae0adefc3f5546"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.195860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.198621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"43719f09246fa232c61032aeaee0aa47ac0c3466043213a37d2f50b6d0e547b5"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.200261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" event={"ID":"5812c445-156f-48d3-aa24-130b329cccfe","Type":"ContainerStarted","Data":"766cf868b27b5bfd6304ca5997596d2654096ef8d7839f748bcb756ce858b1ed"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 
15:39:14.201103 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.237399 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-srg8z" podStartSLOduration=2.228404582 podStartE2EDuration="7.237377353s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:08.058347391 +0000 UTC m=+779.749053655" lastFinishedPulling="2026-01-21 15:39:13.067320162 +0000 UTC m=+784.758026426" observedRunningTime="2026-01-21 15:39:14.235697047 +0000 UTC m=+785.926403311" watchObservedRunningTime="2026-01-21 15:39:14.237377353 +0000 UTC m=+785.928083637" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.241448 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" podStartSLOduration=3.214185358 podStartE2EDuration="7.241428714s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:09.021616392 +0000 UTC m=+780.712322656" lastFinishedPulling="2026-01-21 15:39:13.048859748 +0000 UTC m=+784.739566012" observedRunningTime="2026-01-21 15:39:14.21749609 +0000 UTC m=+785.908202374" watchObservedRunningTime="2026-01-21 15:39:14.241428714 +0000 UTC m=+785.932134988" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.451939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.452437 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.455845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.481409 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" podStartSLOduration=8.053528342 podStartE2EDuration="11.48138984s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:09.64373772 +0000 UTC m=+781.334443984" lastFinishedPulling="2026-01-21 15:39:13.071599218 +0000 UTC m=+784.762305482" observedRunningTime="2026-01-21 15:39:14.256461163 +0000 UTC m=+785.947167417" watchObservedRunningTime="2026-01-21 15:39:18.48138984 +0000 UTC m=+790.172096104" Jan 21 15:39:19 crc kubenswrapper[4739]: I0121 15:39:19.238117 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:19 crc kubenswrapper[4739]: I0121 15:39:19.294034 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:22 crc kubenswrapper[4739]: I0121 15:39:22.254065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"35dfeceb90c3e99c3addff1978cd7ab8e7be1183df9b9c56f2cf6c3d1d15ab2d"} Jan 21 15:39:22 crc kubenswrapper[4739]: I0121 15:39:22.272780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" podStartSLOduration=2.002756779 podStartE2EDuration="15.272764432s" 
podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:08.244653563 +0000 UTC m=+779.935359827" lastFinishedPulling="2026-01-21 15:39:21.514661216 +0000 UTC m=+793.205367480" observedRunningTime="2026-01-21 15:39:22.270011677 +0000 UTC m=+793.960717961" watchObservedRunningTime="2026-01-21 15:39:22.272764432 +0000 UTC m=+793.963470696" Jan 21 15:39:23 crc kubenswrapper[4739]: I0121 15:39:23.031743 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:29 crc kubenswrapper[4739]: I0121 15:39:29.459422 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.529494 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.531050 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.532692 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.540096 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.825678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: 
I0121 15:39:41.825765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.825856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.826218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.826531 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.857709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:42 crc kubenswrapper[4739]: I0121 15:39:42.145029 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:42 crc kubenswrapper[4739]: I0121 15:39:42.569084 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:43 crc kubenswrapper[4739]: I0121 15:39:43.380311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerStarted","Data":"fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b"} Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.354355 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" containerID="cri-o://87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" gracePeriod=15 Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.388961 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="f8b45616c95cb9b9a9fc4113fa83e5a1f4587c17cb5f568bfd95032db6cd2cfe" exitCode=0 Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.389171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"f8b45616c95cb9b9a9fc4113fa83e5a1f4587c17cb5f568bfd95032db6cd2cfe"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.009092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b6f6r_bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/console/0.log" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.009190 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105192 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105295 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105427 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config" (OuterVolumeSpecName: "console-config") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.108074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.112382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.113420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.117125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt" (OuterVolumeSpecName: "kube-api-access-hzdkt") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "kube-api-access-hzdkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.206676 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.206999 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207009 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207017 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207026 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207034 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207042 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395363 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b6f6r_bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/console/0.log" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395404 4739 generic.go:334] "Generic (PLEG): container finished" podID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" exitCode=2 Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395431 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerDied","Data":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395455 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerDied","Data":"3a8882cf407b430ab843c7b0296458050aa0914b1f0016eaa92def189446dcfe"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395475 4739 scope.go:117] "RemoveContainer" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395592 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.427305 4739 scope.go:117] "RemoveContainer" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: E0121 15:39:45.427987 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": container with ID starting with 87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef not found: ID does not exist" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.428015 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} err="failed to get container status \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": rpc error: code = NotFound desc = could not find container \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": container with ID starting with 87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef not found: ID does not exist" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.434409 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.441422 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.404203 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="0acd53fb0f7a9785d7419067eba34faacbe07b2c21c71fab07190ae9e4ca3be6" exitCode=0 Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.404272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"0acd53fb0f7a9785d7419067eba34faacbe07b2c21c71fab07190ae9e4ca3be6"} Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.790755 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" path="/var/lib/kubelet/pods/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/volumes" Jan 21 15:39:47 crc kubenswrapper[4739]: I0121 15:39:47.415193 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="8b674e715b8f691138037321ac74eb37972dba68ba752aeea6e6338ac7b8cdfc" exitCode=0 Jan 21 15:39:47 crc kubenswrapper[4739]: I0121 15:39:47.415320 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"8b674e715b8f691138037321ac74eb37972dba68ba752aeea6e6338ac7b8cdfc"} Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.656504 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.855250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856107 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856683 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle" (OuterVolumeSpecName: "bundle") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.865922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m" (OuterVolumeSpecName: "kube-api-access-78l9m") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "kube-api-access-78l9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.875299 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util" (OuterVolumeSpecName: "util") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956754 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956795 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956808 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.428863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b"} Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.429112 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b" Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.428920 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583692 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="util" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="util" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583714 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583720 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583728 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="pull" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583736 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="pull" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583748 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583755 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583867 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 
21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583881 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.584248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588061 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588144 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588577 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-g7lpv" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588642 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.591976 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.611650 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788598 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.803603 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.810346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.814287 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.899131 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.218547 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.219395 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.227513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.228504 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.236346 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nhqx4" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.243859 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.293894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.294114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.294271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.402457 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.406003 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.410544 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.416532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.485207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"3338b9f4aa5c2bf38566c20c594514dcdec13c952b63f5256d040f8d6a6ee623"} Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.533545 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.970785 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: W0121 15:40:00.975772 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7118ff_ea20_40ec_aa4d_5711926f4b6c.slice/crio-4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6 WatchSource:0}: Error finding container 4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6: Status 404 returned error can't find the container with id 4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6 Jan 21 15:40:01 crc kubenswrapper[4739]: I0121 15:40:01.490861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" event={"ID":"ef7118ff-ea20-40ec-aa4d-5711926f4b6c","Type":"ContainerStarted","Data":"4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.545261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.545912 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.546885 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" event={"ID":"ef7118ff-ea20-40ec-aa4d-5711926f4b6c","Type":"ContainerStarted","Data":"4c517c60a3bf2b4b9ccbc79010f06deca276b4d77c2d2ffd5d456b6fa465ec7d"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.547625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.566590 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podStartSLOduration=1.8682910019999999 podStartE2EDuration="8.566564995s" podCreationTimestamp="2026-01-21 15:39:59 +0000 UTC" firstStartedPulling="2026-01-21 15:40:00.406726423 +0000 UTC m=+832.097432687" lastFinishedPulling="2026-01-21 15:40:07.105000416 +0000 UTC m=+838.795706680" observedRunningTime="2026-01-21 15:40:07.564170609 +0000 UTC m=+839.254876873" watchObservedRunningTime="2026-01-21 15:40:07.566564995 +0000 UTC m=+839.257271259" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.589369 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" podStartSLOduration=1.446455289 podStartE2EDuration="7.589349786s" podCreationTimestamp="2026-01-21 15:40:00 +0000 UTC" firstStartedPulling="2026-01-21 15:40:00.978171998 +0000 UTC m=+832.668878262" lastFinishedPulling="2026-01-21 15:40:07.121066495 +0000 UTC m=+838.811772759" observedRunningTime="2026-01-21 15:40:07.58360544 +0000 UTC m=+839.274311704" watchObservedRunningTime="2026-01-21 15:40:07.589349786 +0000 UTC m=+839.280056050" Jan 21 
15:40:20 crc kubenswrapper[4739]: I0121 15:40:20.538254 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:39 crc kubenswrapper[4739]: I0121 15:40:39.904050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.721371 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4cfnm"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.724433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.728599 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.729252 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731566 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.733181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.745260 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.833283 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hgxx6"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.834191 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.842235 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpgsq" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846988 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847022 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847060 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.850345 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.856290 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948239 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948306 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948342 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948428 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948461 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948513 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948567 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.949097 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.949713 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: E0121 15:40:40.949791 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 15:40:40 crc kubenswrapper[4739]: E0121 15:40:40.949856 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert podName:df4966b4-eef0-46d7-a70b-f7108da36b36 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.449839943 +0000 UTC m=+873.140546207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert") pod "frr-k8s-webhook-server-7df86c4f6c-sjv4j" (UID: "df4966b4-eef0-46d7-a70b-f7108da36b36") : secret "frr-k8s-webhook-server-cert" not found Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950241 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.958648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.979300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.981586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.043896 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050546 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.050723 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.050806 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.550788198 +0000 UTC m=+873.241494472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.051076 4739 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.051115 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.551105406 +0000 UTC m=+873.241811670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "speaker-certs-secret" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.052054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.055329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.054945 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.064674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.071325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.081644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.161271 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.460943 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.465446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.534123 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:41 crc kubenswrapper[4739]: W0121 15:40:41.542431 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed6441e_fd6c_45e1_8e0a_5b3e12ef029c.slice/crio-8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f WatchSource:0}: Error finding container 8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f: Status 404 returned error can't find the container with id 8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.562447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.562502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.562619 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.562696 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:42.562668269 +0000 UTC m=+874.253374533 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.565651 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.651644 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.741756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f"} Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.758141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"55a56bfc3731242b6805a1b12acb9ab95fdb4491974ffaf7b15df0079577d50a"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.055945 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:42 crc kubenswrapper[4739]: W0121 15:40:42.059479 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4966b4_eef0_46d7_a70b_f7108da36b36.slice/crio-143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525 WatchSource:0}: Error finding container 143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525: Status 404 returned error can't find the container with id 143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525 Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.575096 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.585392 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.656953 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: W0121 15:40:42.685882 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58e065e3_180e_4e42_b5ae_7c4468d5f141.slice/crio-ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833 WatchSource:0}: Error finding container ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833: Status 404 returned error can't find the container with id ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833 Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.770713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.774794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" event={"ID":"df4966b4-eef0-46d7-a70b-f7108da36b36","Type":"ContainerStarted","Data":"143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789735 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789767 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"d782ec2b5745bc608e2394a989841e42bb0b8967ab3722fba99b22b9075128a7"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"7db0e80e735fd801f78c3d9c31fc51509be2e3991d19dce090277c7a6ed64781"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.819641 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nq75j" podStartSLOduration=2.819621308 podStartE2EDuration="2.819621308s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:40:42.8160046 +0000 UTC m=+874.506710874" watchObservedRunningTime="2026-01-21 15:40:42.819621308 +0000 UTC m=+874.510327572" Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.799673 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"a84e8d379b08d4cb5811031f5a255409973712fad30220efff68963e8ea29c9a"} Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.799987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"834ad4b73b4e00f49ab705bd46991a40eb68338d39221f1f481b813947fab61e"} Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.822872 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hgxx6" podStartSLOduration=3.822849351 podStartE2EDuration="3.822849351s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:40:43.819077238 +0000 UTC m=+875.509783502" watchObservedRunningTime="2026-01-21 15:40:43.822849351 +0000 UTC m=+875.513555625" Jan 21 15:40:44 crc kubenswrapper[4739]: I0121 15:40:44.809924 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:51 crc kubenswrapper[4739]: I0121 15:40:51.166304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.889212 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" event={"ID":"df4966b4-eef0-46d7-a70b-f7108da36b36","Type":"ContainerStarted","Data":"1bc774774f016c8c825ed0752e3dce681e8ef0808c620dbc7d1ccdf6be8baf62"} Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.889838 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.891857 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="765293ee05c60e8ec1c4bab84961f9c331cf77b4dcaff699157b90e67ff6e514" exitCode=0 Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.891900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"765293ee05c60e8ec1c4bab84961f9c331cf77b4dcaff699157b90e67ff6e514"} Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.923434 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" podStartSLOduration=2.423768582 podStartE2EDuration="14.923410491s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="2026-01-21 15:40:42.062218134 +0000 UTC m=+873.752924398" lastFinishedPulling="2026-01-21 15:40:54.561860043 +0000 UTC m=+886.252566307" observedRunningTime="2026-01-21 15:40:54.920658126 +0000 UTC m=+886.611364390" watchObservedRunningTime="2026-01-21 15:40:54.923410491 +0000 UTC m=+886.614116755" Jan 21 15:40:55 crc kubenswrapper[4739]: I0121 15:40:55.899409 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="9742fc311ce63498afa8c64a16a1ea4705595e36fb56ac65ce3c6a484d381437" exitCode=0 Jan 21 15:40:55 crc kubenswrapper[4739]: I0121 15:40:55.899482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"9742fc311ce63498afa8c64a16a1ea4705595e36fb56ac65ce3c6a484d381437"} Jan 21 15:40:56 crc kubenswrapper[4739]: I0121 15:40:56.911162 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="a49a01192b73408cb35c9ec0930c66f4fac01a368e560e3dee3fb40da76641e0" exitCode=0 Jan 21 15:40:56 crc kubenswrapper[4739]: I0121 15:40:56.911454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"a49a01192b73408cb35c9ec0930c66f4fac01a368e560e3dee3fb40da76641e0"} Jan 21 15:40:57 crc kubenswrapper[4739]: I0121 15:40:57.919972 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"91cd5971f9c90e2fd53d7db9ba8c3e1f100cab529f53cf199198cf661a5ab58c"} Jan 21 15:40:57 crc kubenswrapper[4739]: I0121 15:40:57.920744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"b6c67bde586769cc52ff27406c79335bcf815f5a7f762874e649497a11113478"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.933783 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"81b35be6a910b91a6219ad60435324bda44374591ac5840d4b9783feb08e30d5"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"0393fffb91efef395611ef11b58f86be81ebb0a72c3fc818dbae4ef857977cce"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934035 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"da5c5e8d616ee10344c6926a024136f5587a2e735d2b575a7cc17a30f1be56c6"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"dc736db97ce864bd815c1b522f861b70ce234c2ca608b94af3b72ab34762cd47"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.965923 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4cfnm" podStartSLOduration=5.957938908 podStartE2EDuration="18.965902171s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="2026-01-21 15:40:41.53776965 +0000 UTC m=+873.228475904" lastFinishedPulling="2026-01-21 15:40:54.545732903 +0000 UTC m=+886.236439167" observedRunningTime="2026-01-21 15:40:58.959466295 +0000 UTC m=+890.650172569" watchObservedRunningTime="2026-01-21 15:40:58.965902171 +0000 UTC m=+890.656608435" Jan 21 15:41:01 crc kubenswrapper[4739]: I0121 15:41:01.044566 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:01 crc kubenswrapper[4739]: I0121 15:41:01.079378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:02 crc kubenswrapper[4739]: I0121 15:41:02.661428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hgxx6" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.222895 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.222955 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.961959 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.963372 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.966633 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.968070 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.971448 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.036119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.096955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.197961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.214995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.284115 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.724947 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.997496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerStarted","Data":"f9d6b28bf8b3702f81aa07d3be9110b43ff7cc98c8df2f5c9dab8d2fe84bdb5b"} Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.334095 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.947383 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.948512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.978373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.048145 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.149478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.167483 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.281719 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.708634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.023516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggtdm" event={"ID":"50c62dc2-9ca0-4c34-9043-e5a859e7d931","Type":"ContainerStarted","Data":"79fd40d317fde9484f549c79640515ba8fb0dd00419231079f1be6f376cc1015"} Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.025269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerStarted","Data":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.025389 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zl5j4" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" containerID="cri-o://dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" gracePeriod=2 Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.052747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.085128 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zl5j4" podStartSLOduration=2.851987007 podStartE2EDuration="6.085111715s" podCreationTimestamp="2026-01-21 15:41:05 +0000 UTC" firstStartedPulling="2026-01-21 15:41:06.742597126 +0000 UTC m=+898.433303390" lastFinishedPulling="2026-01-21 15:41:09.975721834 +0000 UTC m=+901.666428098" observedRunningTime="2026-01-21 15:41:11.05157347 +0000 UTC m=+902.742279764" watchObservedRunningTime="2026-01-21 15:41:11.085111715 +0000 UTC m=+902.775817979" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.419620 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.465708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"794a1665-fdb1-425b-bf12-f6a8159e2d33\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.471398 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6" (OuterVolumeSpecName: "kube-api-access-c9jf6") pod "794a1665-fdb1-425b-bf12-f6a8159e2d33" (UID: "794a1665-fdb1-425b-bf12-f6a8159e2d33"). InnerVolumeSpecName "kube-api-access-c9jf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.567638 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.656514 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.034269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggtdm" event={"ID":"50c62dc2-9ca0-4c34-9043-e5a859e7d931","Type":"ContainerStarted","Data":"e9702cf64800511344b1f4519411aefd1caa6e408f1bf887d348e7d6733dbd18"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037101 4739 generic.go:334] "Generic (PLEG): container finished" podID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" exitCode=0 Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037153 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerDied","Data":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037457 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerDied","Data":"f9d6b28bf8b3702f81aa07d3be9110b43ff7cc98c8df2f5c9dab8d2fe84bdb5b"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037488 4739 scope.go:117] "RemoveContainer" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.057501 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ggtdm" podStartSLOduration=2.6911370359999998 podStartE2EDuration="3.057484296s" podCreationTimestamp="2026-01-21 15:41:09 +0000 UTC" firstStartedPulling="2026-01-21 15:41:10.726173247 +0000 UTC m=+902.416879521" lastFinishedPulling="2026-01-21 15:41:11.092520517 +0000 UTC m=+902.783226781" observedRunningTime="2026-01-21 15:41:12.053755064 +0000 UTC m=+903.744461318" watchObservedRunningTime="2026-01-21 15:41:12.057484296 +0000 UTC m=+903.748190560" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.066422 4739 scope.go:117] "RemoveContainer" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: E0121 15:41:12.066991 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": container with ID starting with dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5 not found: ID does not exist" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.067031 4739 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} err="failed to get container status \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": rpc error: code = NotFound desc = could not find container \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": container with ID starting with dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5 not found: ID does not exist" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.084891 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.089380 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.789405 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" path="/var/lib/kubelet/pods/794a1665-fdb1-425b-bf12-f6a8159e2d33/volumes" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.282706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.283991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.307263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:21 crc kubenswrapper[4739]: I0121 15:41:21.114709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607261 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:26 crc kubenswrapper[4739]: E0121 15:41:26.607774 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607785 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607917 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.612285 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.614239 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jlh95" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.618287 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658135 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759929 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.760483 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.760660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.778923 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.932306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:27 crc kubenswrapper[4739]: I0121 15:41:27.358622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:28 crc kubenswrapper[4739]: I0121 15:41:28.145177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerStarted","Data":"8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38"} Jan 21 15:41:29 crc kubenswrapper[4739]: I0121 15:41:29.151572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerStarted","Data":"7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec"} Jan 21 15:41:30 crc kubenswrapper[4739]: I0121 15:41:30.157760 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec" exitCode=0 Jan 21 15:41:30 crc kubenswrapper[4739]: I0121 15:41:30.157795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec"} Jan 21 15:41:31 crc kubenswrapper[4739]: I0121 15:41:31.166445 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="2133aafe4b0e82e09aedfbe949422065672a1ed9706c7118d9ff71940715d40d" exitCode=0 Jan 21 15:41:31 crc kubenswrapper[4739]: I0121 15:41:31.166510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" 
event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"2133aafe4b0e82e09aedfbe949422065672a1ed9706c7118d9ff71940715d40d"} Jan 21 15:41:32 crc kubenswrapper[4739]: I0121 15:41:32.174389 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="7e322757f51a7bd4ed080aeb0b150941f39a56ff1f0eac1aff540022da851985" exitCode=0 Jan 21 15:41:32 crc kubenswrapper[4739]: I0121 15:41:32.174443 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"7e322757f51a7bd4ed080aeb0b150941f39a56ff1f0eac1aff540022da851985"} Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.495669 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557462 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.558369 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle" (OuterVolumeSpecName: "bundle") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.571245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util" (OuterVolumeSpecName: "util") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.571582 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn" (OuterVolumeSpecName: "kube-api-access-h4ncn") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "kube-api-access-h4ncn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660089 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660194 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660208 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38"} Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188211 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188213 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962583 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962885 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962903 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962935 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="util" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962943 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="util" Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962957 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="pull" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962964 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="pull" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.963092 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.964120 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978408 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.990709 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.081147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.081604 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.099151 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.223009 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.223071 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.290890 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.744613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:35 crc kubenswrapper[4739]: W0121 15:41:35.759522 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d7edc0_64e0_4918_bf3f_685841092edd.slice/crio-bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2 WatchSource:0}: Error finding container bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2: Status 404 returned error can't find the container with id bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2 Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201461 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92" exitCode=0 Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201505 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92"} Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201530 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerStarted","Data":"bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2"} Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.218432 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e" exitCode=0 Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.218552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e"} Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.807707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.808634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.839779 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.840216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.941122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.960589 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.976533 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:39 crc kubenswrapper[4739]: I0121 15:41:39.123853 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:39 crc kubenswrapper[4739]: I0121 15:41:39.457980 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:39 crc kubenswrapper[4739]: W0121 15:41:39.462184 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4ac48b_8e08_41e5_981c_a57ba6c23f52.slice/crio-fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286 WatchSource:0}: Error finding container fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286: Status 404 returned error can't find the container with id fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286 Jan 21 15:41:40 crc kubenswrapper[4739]: I0121 15:41:40.240580 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286"} Jan 21 15:41:41 crc kubenswrapper[4739]: I0121 15:41:41.258255 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerStarted","Data":"6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67"} Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.147048 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ksr8q" podStartSLOduration=5.248637549 podStartE2EDuration="9.147030012s" podCreationTimestamp="2026-01-21 15:41:34 +0000 UTC" firstStartedPulling="2026-01-21 15:41:36.214567625 +0000 UTC m=+927.905273879" lastFinishedPulling="2026-01-21 15:41:40.112960078 +0000 UTC m=+931.803666342" observedRunningTime="2026-01-21 15:41:41.295313109 +0000 UTC m=+932.986019373" watchObservedRunningTime="2026-01-21 15:41:43.147030012 +0000 UTC m=+934.837736276" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.155215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.156310 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.162590 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.309901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.309971 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.340741 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.479030 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.292113 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.292452 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.327963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:46 crc kubenswrapper[4739]: I0121 15:41:46.336318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:48 crc kubenswrapper[4739]: I0121 15:41:48.936645 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:48 crc kubenswrapper[4739]: I0121 15:41:48.937156 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ksr8q" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" containerID="cri-o://6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" gracePeriod=2 Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.321420 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" exitCode=0 Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.321520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67"} Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.944251 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.946233 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.957748 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118829 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.119461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.119490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.155444 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.264438 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.292110 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293091 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293468 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293501 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-ksr8q" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" Jan 21 15:41:55 crc kubenswrapper[4739]: I0121 15:41:55.961046 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068266 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068422 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.069548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities" (OuterVolumeSpecName: "utilities") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.073410 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82" (OuterVolumeSpecName: "kube-api-access-2ps82") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "kube-api-access-2ps82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.089136 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169661 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169700 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169710 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2"} Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351778 4739 scope.go:117] "RemoveContainer" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351911 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.383057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.392513 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.791896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" path="/var/lib/kubelet/pods/76d7edc0-64e0-4918-bf3f-685841092edd/volumes" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.197933 4739 scope.go:117] "RemoveContainer" containerID="71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.245317 4739 scope.go:117] "RemoveContainer" containerID="686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.414539 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:59 crc kubenswrapper[4739]: W0121 15:41:59.425041 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f476707_f231_44f8_8385_7e927a2a6130.slice/crio-dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c WatchSource:0}: Error finding container dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c: Status 404 returned error can't find the container with id dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.650689 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:59 crc kubenswrapper[4739]: W0121 15:41:59.656003 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ffa92d_2446_4f9e_8964_f6ab87c78432.slice/crio-df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596 WatchSource:0}: Error finding container df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596: Status 404 returned error can't find the container with id df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596 Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.015971 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd" Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.016030 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd" Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.016531 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd,Command:[/operator],Args:[--leader-elect --health-probe-bind-address=:8081],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Val
ueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-
ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED
_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATE
D_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_TOBIKO_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tobiko:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_ANSIBLETEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_HORIZONTEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s
-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_BAREMETAL_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_CLUSTER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TELEMETRY_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,ValueFrom:nil,},EnvVar{Name:OPENSTACK_RELEASE_VERSION,Value:0.5.0-1769008249,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_URL,Value:38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:openstack-operator.v0.5.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{268435456 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7q78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-operator-controller-init-7f8fb8b79-trb6x_openstack-operators(2c4ac48b-8e08-41e5-981c-a57ba6c23f52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.019606 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52"
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.379866 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9" exitCode=0
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.380935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9"}
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.380967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerStarted","Data":"df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596"}
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382328 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" exitCode=0
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818"}
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerStarted","Data":"dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c"} Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.383460 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd\\\"\"" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223040 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223650 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.224406 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.224474 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29" gracePeriod=600 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.418630 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab" exitCode=0 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.418699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab"} Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.423888 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" exitCode=0 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.423977 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b"} Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438638 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29" exitCode=0 Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438687 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438723 4739 scope.go:117] "RemoveContainer" containerID="6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.553760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.554524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.556764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.558926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerStarted","Data":"ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.560635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerStarted","Data":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.596121 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podStartSLOduration=2.607548941 podStartE2EDuration="46.59609744s" podCreationTimestamp="2026-01-21 15:41:38 +0000 UTC" firstStartedPulling="2026-01-21 15:41:39.465613095 +0000 UTC m=+931.156319359" lastFinishedPulling="2026-01-21 15:42:23.454161594 +0000 UTC m=+975.144867858" observedRunningTime="2026-01-21 15:42:24.591133776 +0000 UTC m=+976.281840050" watchObservedRunningTime="2026-01-21 15:42:24.59609744 +0000 UTC m=+976.286803704" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.629329 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzpvr" podStartSLOduration=18.607168192 podStartE2EDuration="41.629312871s" podCreationTimestamp="2026-01-21 15:41:43 +0000 UTC" firstStartedPulling="2026-01-21 15:42:00.382961664 +0000 UTC 
m=+952.073667928" lastFinishedPulling="2026-01-21 15:42:23.405106343 +0000 UTC m=+975.095812607" observedRunningTime="2026-01-21 15:42:24.628903339 +0000 UTC m=+976.319609603" watchObservedRunningTime="2026-01-21 15:42:24.629312871 +0000 UTC m=+976.320019125" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.649356 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6df2j" podStartSLOduration=11.600493685 podStartE2EDuration="33.649336414s" podCreationTimestamp="2026-01-21 15:41:51 +0000 UTC" firstStartedPulling="2026-01-21 15:42:01.389300141 +0000 UTC m=+953.080006415" lastFinishedPulling="2026-01-21 15:42:23.43814288 +0000 UTC m=+975.128849144" observedRunningTime="2026-01-21 15:42:24.644971836 +0000 UTC m=+976.335678100" watchObservedRunningTime="2026-01-21 15:42:24.649336414 +0000 UTC m=+976.340042678" Jan 21 15:42:29 crc kubenswrapper[4739]: I0121 15:42:29.126432 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.265678 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.267208 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.310411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.643680 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.687139 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.479775 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.479848 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.525440 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.650149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:34 crc kubenswrapper[4739]: I0121 15:42:34.612438 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6df2j" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server" containerID="cri-o://e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" gracePeriod=2 Jan 21 15:42:34 crc kubenswrapper[4739]: I0121 15:42:34.946077 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.026778 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107605 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107775 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.108695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities" (OuterVolumeSpecName: "utilities") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.132715 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n" (OuterVolumeSpecName: "kube-api-access-5wm5n") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "kube-api-access-5wm5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.174250 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209475 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209521 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209535 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620096 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" exitCode=0 Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c"} Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620246 4739 scope.go:117] "RemoveContainer" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620682 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mzpvr" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server" containerID="cri-o://ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" gracePeriod=2 Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620980 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.641914 4739 scope.go:117] "RemoveContainer" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.654129 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.659667 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.663104 4739 scope.go:117] "RemoveContainer" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.704720 4739 scope.go:117] "RemoveContainer" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": container with ID starting with e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a not found: ID does not exist" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705301 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} err="failed to get container status \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": rpc error: code = NotFound desc = could not find container \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": container with ID starting with e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a not found: ID does not exist" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705327 4739 scope.go:117] "RemoveContainer" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705572 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": container with ID starting with c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b not found: ID does not exist" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705601 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b"} err="failed to get container status \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": rpc error: code = NotFound desc = could not find container \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": container with ID starting with c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b not found: ID does not exist" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705619 4739 scope.go:117] "RemoveContainer" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705812 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": container with ID starting with f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818 not found: ID does not exist" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705859 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818"} err="failed to get container status \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": rpc error: code = NotFound desc = could not find container \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": container with ID starting with f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818 not found: ID does not exist" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.635070 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" exitCode=0 Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.635166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1"} Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.747858 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.789176 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f476707-f231-44f8-8385-7e927a2a6130" path="/var/lib/kubelet/pods/3f476707-f231-44f8-8385-7e927a2a6130/volumes" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834640 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.835615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities" (OuterVolumeSpecName: "utilities") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.838718 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh" (OuterVolumeSpecName: "kube-api-access-jfpkh") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "kube-api-access-jfpkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.918133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936442 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936486 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936501 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596"} Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643442 4739 scope.go:117] "RemoveContainer" containerID="ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643555 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643555 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr"
Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.668238 4739 scope.go:117] "RemoveContainer" containerID="2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab"
Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.682590 4739 scope.go:117] "RemoveContainer" containerID="42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9"
Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.700307 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"]
Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.708237 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"]
Jan 21 15:42:38 crc kubenswrapper[4739]: I0121 15:42:38.791262 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" path="/var/lib/kubelet/pods/23ffa92d-2446-4f9e-8964-f6ab87c78432/volumes"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.908915 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"]
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909708 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909723 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909738 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909758 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909766 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909775 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909782 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909797 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909805 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909840 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909848 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-utilities"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909864 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909872 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909885 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909892 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-content"
Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909907 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909915 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910071 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910086 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910103 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910578 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.921169 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"]
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.921359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.922109 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.929079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"]
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.930315 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.944842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"]
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.959608 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"]
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.961055 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.980223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.983904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984000 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984044 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"
Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.060469 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.061968 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.064701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088877 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.111889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.116035 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.116769 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.122320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.130372 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"]
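Every operator pod in this wave mounts a kube-api-access-* volume: the projected service-account volume (token, kube-root-ca.crt, namespace) whose attach/mount progress the VerifyControllerAttachedVolume and MountVolume records above track. A sketch of what such a volume looks like when built with the k8s.io/api types follows; the 3607-second token expiry is the usual admission-controller default and is assumed here, not read from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiration := int64(3607) // assumed default; not taken from the log
	vol := corev1.Volume{
		Name: "kube-api-access-7dpwv", // name taken from the records above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// 1. a bound, auto-rotated service-account token
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiration,
					}},
					// 2. the cluster CA bundle
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// 3. the pod's namespace via the downward API
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}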
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.131131 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.141242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.141989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.155225 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.158311 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.158717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.182949 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.184392 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.187437 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.191229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.191479 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192189 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.197184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.202227 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.203326 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.207528 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.215742 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.217397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.227454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.234218 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.247168 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.258196 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.263052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.287136 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.293993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294164 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.294791 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.294861 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:49.794841324 +0000 UTC m=+1001.485547588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.295184 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.348060 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.349024 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.361517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.374064 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.375247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.377897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"]
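The nestedpendingoperations.go:348 record above shows the volume manager's exponential backoff: the first MountVolume.SetUp failure for the infra-operator cert volume is retried after 500ms, and the next failure (later in this log) after 1s. A toy sketch of that doubling retry cadence follows; the mount function is a stand-in that always fails, and the formatting only loosely mimics the kubelet's message.

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a failing mount, doubling the wait after each
// failure up to a cap -- the cadence visible in the log (500ms, then 1s, ...).
func mountWithBackoff(mount func() error, initial, cap time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = mount(); err == nil {
			return nil
		}
		fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s). Error: %v\n",
			time.Now().Add(delay).Format("15:04:05.000"), delay, err)
		time.Sleep(delay)
		if delay *= 2; delay > cap {
			delay = cap
		}
	}
	return err
}

func main() {
	missing := errors.New(`secret "infra-operator-webhook-server-cert" not found`)
	_ = mountWithBackoff(func() error { return missing }, 500*time.Millisecond, 2*time.Second, 3)
}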
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.378915 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.388441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.398253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399675 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.404685 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.406569 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.412502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.437712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.441269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.447891 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.470087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.473439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.478804 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.479536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.481877 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.487372 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.497261 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.498030 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500664 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.507759 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.508893 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.560892 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.587604 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.587828 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603880 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.614916 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.615922 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.619592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.622283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.622289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.669492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.672606 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.691383 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.692115 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.721543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.726081 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.727912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.759955 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.778060 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.790038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.816852 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.823037 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831465 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.832092 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.832151 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:50.832132884 +0000 UTC m=+1002.522839148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.888239 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.891042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"]
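Here the infra-operator cert volume fails a second time and the backoff doubles to 1s (m=+1002... vs m=+1001... earlier). nestedpendingoperations also enforces that at most one operation is in flight per volume at a time; below is a toy sketch of that exclusiveness, keyed only by volume name for brevity (the real implementation also keys by pod and tracks the retry deadline).

package main

import (
	"fmt"
	"sync"
)

// pendingOps allows at most one in-flight operation per volume.
type pendingOps struct {
	mu      sync.Mutex
	pending map[string]bool
}

func (p *pendingOps) run(volume string, op func() error) error {
	p.mu.Lock()
	if p.pending[volume] {
		p.mu.Unlock()
		return fmt.Errorf("operation for volume %q is already pending", volume)
	}
	p.pending[volume] = true
	p.mu.Unlock()
	defer func() { // clear the pending mark once the operation finishes
		p.mu.Lock()
		delete(p.pending, volume)
		p.mu.Unlock()
	}()
	return op()
}

func main() {
	p := &pendingOps{pending: map[string]bool{}}
	vol := "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert"
	started, done := make(chan struct{}), make(chan struct{})
	go p.run(vol, func() error { close(started); <-done; return nil }) // long-running mount
	<-started // first operation is now in flight
	if err := p.run(vol, func() error { return nil }); err != nil {
		fmt.Println(err) // second attempt rejected while the first is pending
	}
	close(done)
}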
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.897495 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.916911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.919210 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.933615 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.933701 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:50.433682157 +0000 UTC m=+1002.124388421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.949452 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.958031 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.990895 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"]
Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.991984 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.004365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.004751 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.023921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.039028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.039871 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.040017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.040695 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.059984 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"
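The interleaved SyncLoop ADD / UPDATE / DELETE / REMOVE records throughout this section are the kubelet's sync loop dispatching pod updates from the API source. A toy dispatcher over the same four operation kinds follows; the podUpdate type and string constants are illustrative stand-ins for kubelet's internal types, not its real API.

package main

import "fmt"

// podUpdate is a stand-in for the updates the sync loop receives.
type podUpdate struct {
	Op   string   // "ADD", "UPDATE", "DELETE", or "REMOVE"
	Pods []string // namespace/name of the affected pods
}

func syncLoopIteration(updates <-chan podUpdate) {
	for u := range updates {
		switch u.Op {
		case "ADD": // new pod assigned to this node
			fmt.Printf("SyncLoop ADD source=%q pods=%v\n", "api", u.Pods)
		case "UPDATE": // spec or status changed on the API server
			fmt.Printf("SyncLoop UPDATE source=%q pods=%v\n", "api", u.Pods)
		case "DELETE": // graceful deletion requested
			fmt.Printf("SyncLoop DELETE source=%q pods=%v\n", "api", u.Pods)
		case "REMOVE": // pod is gone from the API server; tear everything down
			fmt.Printf("SyncLoop REMOVE source=%q pods=%v\n", "api", u.Pods)
		}
	}
}

func main() {
	ch := make(chan podUpdate, 2)
	ch <- podUpdate{"ADD", []string{"openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"}}
	ch <- podUpdate{"UPDATE", []string{"openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"}}
	close(ch)
	syncLoopIteration(ch)
}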
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.065235 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.079967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154392 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.155140 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.155252 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.156323 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.180296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.188521 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.190338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.203652 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.204707 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.204801 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.211288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.254227 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.255989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.256044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.256077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.343040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.345403 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.370775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.370906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.371305 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.401578 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.402470 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.405419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.408565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.419259 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422202 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422440 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422787 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.485856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.486253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.486401 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.486447 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.486433937 +0000 UTC m=+1003.177140201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.531439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.584887 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.585896 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.590954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.590998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.591038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.591100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.593586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.632357 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"]
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.653935 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.654244 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.693878 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.693921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.193906763 +0000 UTC m=+1002.884613017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.694372 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.694403 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.194395197 +0000 UTC m=+1002.885101451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.724073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.725520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.758900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.773788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"9be47884ad7dc4a15c59d2061617c3917746870932b64383a93b8dcf280149eb"} Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.902801 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.904910 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.904959 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. 
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.909336 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"]
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.933344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"
Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.980410 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.214770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.214852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.214955 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.214991 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.215003 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:52.214988784 +0000 UTC m=+1003.905695048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.215100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:52.215076777 +0000 UTC m=+1003.905783141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.279745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.315148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"]
Jan 21 15:42:51 crc kubenswrapper[4739]: W0121 15:42:51.328122 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5 WatchSource:0}: Error finding container de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5: Status 404 returned error can't find the container with id de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.505170 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.531148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.531289 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.531337 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:53.531320963 +0000 UTC m=+1005.222027227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.548936 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.598782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.722383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.734680 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.740859 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.755622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"]
Jan 21 15:42:51 crc kubenswrapper[4739]: W0121 15:42:51.757353 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e1c82f_0872_46ed_b8c7_f54328ee947d.slice/crio-26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107 WatchSource:0}: Error finding container 26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107: Status 404 returned error can't find the container with id 26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.769457 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.786003 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"]
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.799180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"1e0b705db284ea08aa86976a8201ae0262a42dab07c3deddebbe308cdc99df53"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.801365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"1efe1932400f7d22c1efab16da6988c3b2bf85f71486f0912f79ba21a828bdcd"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.803803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"e03da793fb6310dfc898d0bbb0eb4e4878dd5cae1f37ce87a7cb2ccc7ceaded9"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.805684 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.806688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"4f521fd960f16c0c2b84438fa8e0ee075b920a5f11178127f1ba30014ad84b30"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.807533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.809399 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"435b5998b2c9279e80b5e4d23f41c13ae3f10d29fdb24975d3c7e86743921c5a"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.810541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"d679015e50edc7f0b3d675b5d9b8c2b6b81ee1ef48f523bd29e8fc249e3f991c"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.813023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"05d93a1e7c3e0cce38f3ce6c90a341cd504af9670dd2d6ef028d1989d107b415"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.814106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"19d595bada84876482f01a2c62141bac832492be936bdcd635576e26256891c5"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.815010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"dbda744b2bb5f5076c28f2e7fab43d48ad12eca8cbe3ce35b39c0ab84d9503a2"}
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.939871 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"]
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.977044 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-c458w_openstack-operators(a508acc2-8e44-462f-a06a-9ae09a853f5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.977475 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"]
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.981913 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a"
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.996736 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrr8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-r5nns_openstack-operators(8b8f2c9e-6151-4006-922f-dabaa3a79ddd): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.997979 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd"
Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.998893 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"]
Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.000802 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47f3183_b43e_4910_b383_b6b674104aee.slice/crio-de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf WatchSource:0}: Error finding container de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf: Status 404 returned error can't find the container with id de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.003016 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-qcl6m_openstack-operators(e47f3183-b43e-4910-b383-b6b674104aee): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.003420 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a751a90_6eaf_445b_8d90_f97d65684393.slice/crio-5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b WatchSource:0}: Error finding container 5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b: Status 404 returned error can't find the container with id 5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.004204 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee"
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.005854 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r655x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-pljxf_openstack-operators(1a751a90-6eaf-445b-8d90-f97d65684393): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.007431 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.011258 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"]
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.015624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"]
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.196495 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"]
Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.201158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76514973_bbd4_4c59_9c31_be5df2dbc2d3.slice/crio-d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c WatchSource:0}: Error finding container d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c: Status 404 returned error can't find the container with id d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.248694 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.248760 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.248936 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.248942 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.249000 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:54.248982963 +0000 UTC m=+1005.939689227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.249019 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:54.249011294 +0000 UTC m=+1005.939717558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.824381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf"}
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.828253 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.831277 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"f57a62176de06712af0cae0e6f0ec3f605467f7d5bc627bdb88b85ea14864c5b"}
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.836186 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.837753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"f5afece6ac6108cc445fe98617faf8dfab72b3731a59c743ed11648ad0f0687f"}
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.838869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"18696c2a1efa40e45ecd566fb0070883b79c1bb641928b08237a93798acbfea0"}
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.840120 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b"}
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.852336 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.853276 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"9c83274a0a079591a096fa958b66419a5567910a0b7e6e1e130cc50019879367"}
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.854220 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd"
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.854750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"62c548a4629ef2494ffadc326a973348516df73cb0c0d126b2e5d7439dfd4a8c"}
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.858057 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c"}
Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.959383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.959797 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.959877 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:56.959857511 +0000 UTC m=+1008.650563785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:53 crc kubenswrapper[4739]: I0121 15:42:53.569614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.569786 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.569863 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:57.569848331 +0000 UTC m=+1009.260554595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875202 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee"
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875534 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393"
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875574 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd"
Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875603 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a"
Jan 21 15:42:54 crc kubenswrapper[4739]: I0121 15:42:54.279910 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:54 crc kubenswrapper[4739]: I0121 15:42:54.279970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280109 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280124 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280192 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:58.280163824 +0000 UTC m=+1009.970870098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found
Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280212 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:58.280204085 +0000 UTC m=+1009.970910349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found
Jan 21 15:42:57 crc kubenswrapper[4739]: I0121 15:42:57.041775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.042179 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.042320 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:43:05.042301146 +0000 UTC m=+1016.733007410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found
Jan 21 15:42:57 crc kubenswrapper[4739]: I0121 15:42:57.649259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.649471 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.649518 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:43:05.649503461 +0000 UTC m=+1017.340209725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 15:42:58 crc kubenswrapper[4739]: I0121 15:42:58.360724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:58 crc kubenswrapper[4739]: I0121 15:42:58.360799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"
Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.360941 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.361025 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:06.361003545 +0000 UTC m=+1018.051709899 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found
Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.360941 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.362211 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:06.362198787 +0000 UTC m=+1018.052905161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.082295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.094880 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.153018 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.689787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.694864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.923185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:43:06 crc kubenswrapper[4739]: I0121 15:43:06.398611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:06 crc kubenswrapper[4739]: I0121 15:43:06.398973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.398770 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399072 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:22.399083377 +0000 UTC m=+1034.089789631 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399118 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:22.399109788 +0000 UTC m=+1034.089816052 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.688794 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.689632 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-jtj62_openstack-operators(30f88e7d-645a-4b19-bafd-05ba8bb11914): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.691032 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.989086 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.124613 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.125467 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fzvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-j4f2g_openstack-operators(4c4bf693-865f-4d6d-ba43-d37a43a2faa0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:15 crc kubenswrapper[4739]: 
E0121 15:43:15.126935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.828361 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.828608 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ml27v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-rf69b_openstack-operators(f6e1c82f-0872-46ed-b8c7-f54328ee947d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.829965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" Jan 21 15:43:16 crc kubenswrapper[4739]: E0121 15:43:16.049848 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" Jan 21 15:43:16 crc kubenswrapper[4739]: E0121 15:43:16.050199 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.464736 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.465281 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbq8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-p74fm_openstack-operators(031e8a3d-8560-4f90-a4ee-9303509dc643): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.466896 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.768901 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.769106 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dpwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-phbcl_openstack-operators(ee924d67-3bf6-48e6-b378-244e5912ccf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.770387 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" Jan 21 15:43:21 crc kubenswrapper[4739]: E0121 15:43:21.094799 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" Jan 21 15:43:21 crc kubenswrapper[4739]: E0121 15:43:21.095283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.443408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.443715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.451550 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod 
\"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.452695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.562611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.571717 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.717905 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.718143 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8fx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-x8qlx_openstack-operators(83d3bc4f-4498-4f3f-ac28-5832348b73a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.719372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.109284 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.292143 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.292407 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f67t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-h45sn_openstack-operators(5dcd510c-acad-453b-9777-dfaa2513eef8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.294784 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.119030 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.284598 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.284803 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dz594,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-p94b8_openstack-operators(c14851f1-903f-4792-93bf-2c147370f312): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.286212 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" Jan 21 15:43:25 crc kubenswrapper[4739]: E0121 15:43:25.125754 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.796263 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.797991 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j274z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-gdj28_openstack-operators(b4ea78b8-c892-42e6-b39b-51d33fdac25a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.799223 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.145072 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.358531 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.358706 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qgcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-5pbdz_openstack-operators(4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.359927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" Jan 21 15:43:29 crc kubenswrapper[4739]: E0121 15:43:29.153381 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.286171 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.286451 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-lk4sx_openstack-operators(6be2175b-8e2d-48d5-938e-e729cb3ac784): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.287675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.764788 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.764998 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zbpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-zzrjd_openstack-operators(142b0baa-2c17-4e40-b473-7251e3fefddd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.766650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" Jan 21 15:43:31 crc kubenswrapper[4739]: E0121 15:43:31.164330 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" Jan 21 15:43:31 crc kubenswrapper[4739]: E0121 15:43:31.166700 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.082693 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.084053 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-c458w_openstack-operators(a508acc2-8e44-462f-a06a-9ae09a853f5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.086628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" 
podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.277374 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.277902 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-qcl6m_openstack-operators(e47f3183-b43e-4910-b383-b6b674104aee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.280016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.806113 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.806347 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrr8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-r5nns_openstack-operators(8b8f2c9e-6151-4006-922f-dabaa3a79ddd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.807591 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.552976 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.553230 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r655x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-pljxf_openstack-operators(1a751a90-6eaf-445b-8d90-f97d65684393): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.554582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.299210 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.299848 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsnfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-cnccn_openstack-operators(22ce2630-c747-40f4-8f8b-62414689534b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.301283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.241606 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.294585 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.294747 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b75ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4jj56_openstack-operators(76514973-bbd4-4c59-9c31-be5df2dbc2d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.295981 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podUID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" Jan 21 15:43:43 crc kubenswrapper[4739]: I0121 15:43:43.889918 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"] Jan 21 15:43:43 crc kubenswrapper[4739]: W0121 15:43:43.998291 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6032ac_99cd_4ac4_899b_74a9e3b53949.slice/crio-9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3 WatchSource:0}: Error finding container 9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3: Status 404 returned error can't find the container with id 9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3 Jan 21 15:43:43 crc kubenswrapper[4739]: I0121 15:43:43.998594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"] Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.012593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"] Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.246360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"4885d142c7d0268ab38f16d745925c76a622ffc8b081db3fad7f74578efa615a"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.248275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.248438 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.249843 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.250021 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.251309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.251509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.253072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.253232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.254353 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.254518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.255549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" 
event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.255699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.257576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.257797 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.260546 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.260828 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.261387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"69fb0a0b620ccf5eb3d67a99415e24cd6b1015a2628e54ed23efc75da017fc33"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.262750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.262939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.264478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.265797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.266040 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.267721 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" 
event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.268199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:43:44 crc kubenswrapper[4739]: E0121 15:43:44.269070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podUID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.337843 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podStartSLOduration=3.810713273 podStartE2EDuration="55.337826165s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.795188528 +0000 UTC m=+1003.485894792" lastFinishedPulling="2026-01-21 15:43:43.32230142 +0000 UTC m=+1055.013007684" observedRunningTime="2026-01-21 15:43:44.30090624 +0000 UTC m=+1055.991612504" watchObservedRunningTime="2026-01-21 15:43:44.337826165 +0000 UTC m=+1056.028532429" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.385986 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podStartSLOduration=4.069067991 podStartE2EDuration="56.385966235s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.050156724 +0000 UTC m=+1002.740862988" lastFinishedPulling="2026-01-21 15:43:43.367054968 +0000 UTC m=+1055.057761232" observedRunningTime="2026-01-21 15:43:44.382641825 +0000 UTC m=+1056.073348099" watchObservedRunningTime="2026-01-21 15:43:44.385966235 +0000 UTC m=+1056.076672499" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.404535 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podStartSLOduration=3.850209789 podStartE2EDuration="55.40451263s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.76912593 +0000 UTC m=+1003.459832194" lastFinishedPulling="2026-01-21 15:43:43.323428771 +0000 UTC m=+1055.014135035" observedRunningTime="2026-01-21 15:43:44.401400915 +0000 UTC m=+1056.092107179" watchObservedRunningTime="2026-01-21 15:43:44.40451263 +0000 UTC m=+1056.095218904" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.440892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podStartSLOduration=4.361013613 podStartE2EDuration="56.44087002s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.293447142 +0000 UTC m=+1002.984153406" lastFinishedPulling="2026-01-21 15:43:43.373303549 +0000 UTC m=+1055.064009813" observedRunningTime="2026-01-21 15:43:44.435777312 +0000 UTC m=+1056.126483576" watchObservedRunningTime="2026-01-21 15:43:44.44087002 +0000 UTC m=+1056.131576284" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.478383 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podStartSLOduration=3.157475711 podStartE2EDuration="55.478360961s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.107036726 +0000 UTC m=+1002.797742990" lastFinishedPulling="2026-01-21 15:43:43.427921976 +0000 UTC m=+1055.118628240" observedRunningTime="2026-01-21 15:43:44.474153096 +0000 UTC m=+1056.164859360" watchObservedRunningTime="2026-01-21 15:43:44.478360961 +0000 UTC m=+1056.169067225" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.514207 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podStartSLOduration=3.934211817 podStartE2EDuration="55.514184976s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.766755727 +0000 UTC m=+1003.457461991" lastFinishedPulling="2026-01-21 15:43:43.346728886 +0000 UTC m=+1055.037435150" observedRunningTime="2026-01-21 15:43:44.5103135 +0000 UTC m=+1056.201019774" watchObservedRunningTime="2026-01-21 15:43:44.514184976 +0000 UTC m=+1056.204891240" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.551073 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podStartSLOduration=4.000398008 podStartE2EDuration="55.551054219s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.771539766 +0000 UTC m=+1003.462246030" lastFinishedPulling="2026-01-21 15:43:43.322195977 +0000 UTC m=+1055.012902241" observedRunningTime="2026-01-21 15:43:44.544749157 +0000 UTC m=+1056.235455431" watchObservedRunningTime="2026-01-21 15:43:44.551054219 +0000 UTC m=+1056.241760483" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.573969 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podStartSLOduration=17.280347834 podStartE2EDuration="55.573944412s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.976154475 +0000 UTC m=+1003.666860739" lastFinishedPulling="2026-01-21 15:43:30.269751053 +0000 UTC m=+1041.960457317" observedRunningTime="2026-01-21 15:43:44.568600277 +0000 UTC m=+1056.259306551" watchObservedRunningTime="2026-01-21 15:43:44.573944412 +0000 UTC m=+1056.264650686" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.694746 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podStartSLOduration=4.6183971150000005 podStartE2EDuration="56.69472579s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.338240847 +0000 UTC m=+1003.028947111" lastFinishedPulling="2026-01-21 15:43:43.414569522 +0000 UTC m=+1055.105275786" observedRunningTime="2026-01-21 15:43:44.694589376 +0000 UTC m=+1056.385295660" watchObservedRunningTime="2026-01-21 15:43:44.69472579 +0000 UTC m=+1056.385432054" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.753669 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podStartSLOduration=7.234281995 podStartE2EDuration="55.753641165s" podCreationTimestamp="2026-01-21 15:42:49 
+0000 UTC" firstStartedPulling="2026-01-21 15:42:51.745269204 +0000 UTC m=+1003.435975468" lastFinishedPulling="2026-01-21 15:43:40.264628364 +0000 UTC m=+1051.955334638" observedRunningTime="2026-01-21 15:43:44.750016765 +0000 UTC m=+1056.440723029" watchObservedRunningTime="2026-01-21 15:43:44.753641165 +0000 UTC m=+1056.444347449" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.872866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podStartSLOduration=4.136343585 podStartE2EDuration="56.872719846s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:50.633059923 +0000 UTC m=+1002.323766187" lastFinishedPulling="2026-01-21 15:43:43.369436164 +0000 UTC m=+1055.060142448" observedRunningTime="2026-01-21 15:43:44.836398557 +0000 UTC m=+1056.527104821" watchObservedRunningTime="2026-01-21 15:43:44.872719846 +0000 UTC m=+1056.563426110" Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.276693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45"} Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.280267 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.806533 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podStartSLOduration=55.806517406 podStartE2EDuration="55.806517406s" podCreationTimestamp="2026-01-21 15:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:43:45.324045642 +0000 UTC m=+1057.014751906" watchObservedRunningTime="2026-01-21 15:43:45.806517406 +0000 UTC m=+1057.497223670" Jan 21 15:43:47 crc kubenswrapper[4739]: E0121 15:43:47.784368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.238187 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.252281 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.293773 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.410591 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 
15:43:49.447182 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.594732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.793648 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.826593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.964261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:43:50 crc kubenswrapper[4739]: I0121 15:43:50.063599 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:43:50 crc kubenswrapper[4739]: I0121 15:43:50.258087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:43:50 crc kubenswrapper[4739]: E0121 15:43:50.785695 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:43:52 crc kubenswrapper[4739]: I0121 15:43:52.577571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:53 crc kubenswrapper[4739]: E0121 15:43:53.785628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:43:55 crc kubenswrapper[4739]: E0121 15:43:55.784639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.642702 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.643274 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5gxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-77c48c7859-zk9pf_openstack-operators(ef6032ac-99cd-4ac4-899b-74a9e3b53949): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.644454 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" Jan 21 15:44:00 crc kubenswrapper[4739]: E0121 15:44:00.761149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238\\\"\"" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.416674 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.418036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.418406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.419515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.421539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.421734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.423362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.423535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.424653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.424884 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.444534 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podStartSLOduration=5.245581035 podStartE2EDuration="1m14.444518528s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.571142262 +0000 UTC m=+1003.261848526" lastFinishedPulling="2026-01-21 15:44:00.770079755 +0000 UTC m=+1072.460786019" observedRunningTime="2026-01-21 15:44:03.440056177 +0000 UTC m=+1075.130762441" watchObservedRunningTime="2026-01-21 15:44:03.444518528 +0000 UTC m=+1075.135224792" Jan 21 
15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.462704 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podStartSLOduration=3.999673355 podStartE2EDuration="1m13.462691783s" podCreationTimestamp="2026-01-21 15:42:50 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.203266813 +0000 UTC m=+1003.893973077" lastFinishedPulling="2026-01-21 15:44:01.666285241 +0000 UTC m=+1073.356991505" observedRunningTime="2026-01-21 15:44:03.459856596 +0000 UTC m=+1075.150562860" watchObservedRunningTime="2026-01-21 15:44:03.462691783 +0000 UTC m=+1075.153398047" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.498115 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podStartSLOduration=5.534779014 podStartE2EDuration="1m14.498100946s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.797757507 +0000 UTC m=+1003.488463771" lastFinishedPulling="2026-01-21 15:44:00.761079439 +0000 UTC m=+1072.451785703" observedRunningTime="2026-01-21 15:44:03.491886338 +0000 UTC m=+1075.182592602" watchObservedRunningTime="2026-01-21 15:44:03.498100946 +0000 UTC m=+1075.188807210" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.509276 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podStartSLOduration=4.478691372 podStartE2EDuration="1m14.50926227s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.624673733 +0000 UTC m=+1003.315379997" lastFinishedPulling="2026-01-21 15:44:01.655244631 +0000 UTC m=+1073.345950895" observedRunningTime="2026-01-21 15:44:03.506901176 +0000 UTC m=+1075.197607440" watchObservedRunningTime="2026-01-21 15:44:03.50926227 +0000 UTC m=+1075.199968534" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.527837 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podStartSLOduration=4.401882473 podStartE2EDuration="1m14.527803305s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.530372957 +0000 UTC m=+1003.221079221" lastFinishedPulling="2026-01-21 15:44:01.656293789 +0000 UTC m=+1073.347000053" observedRunningTime="2026-01-21 15:44:03.523321503 +0000 UTC m=+1075.214027767" watchObservedRunningTime="2026-01-21 15:44:03.527803305 +0000 UTC m=+1075.218509569" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.438654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed"} Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.439292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.439321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.467025 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podStartSLOduration=59.6095655 podStartE2EDuration="1m16.467005114s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:43:43.912657651 +0000 UTC m=+1055.603363915" lastFinishedPulling="2026-01-21 15:44:00.770097265 +0000 UTC m=+1072.460803529" observedRunningTime="2026-01-21 15:44:05.462217004 +0000 UTC m=+1077.152923278" watchObservedRunningTime="2026-01-21 15:44:05.467005114 +0000 UTC m=+1077.157711378" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.800086 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podStartSLOduration=3.624125301 podStartE2EDuration="1m16.80006563s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.002853089 +0000 UTC m=+1003.693559353" lastFinishedPulling="2026-01-21 15:44:05.178793428 +0000 UTC m=+1076.869499682" observedRunningTime="2026-01-21 15:44:05.483095052 +0000 UTC m=+1077.173801316" watchObservedRunningTime="2026-01-21 15:44:05.80006563 +0000 UTC m=+1077.490771894" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.445907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149"} Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.446706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.447364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8"} Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.484476 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podStartSLOduration=3.903582421 podStartE2EDuration="1m17.484459071s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.976902735 +0000 UTC m=+1003.667608999" lastFinishedPulling="2026-01-21 15:44:05.557779385 +0000 UTC m=+1077.248485649" observedRunningTime="2026-01-21 15:44:06.480066212 +0000 UTC m=+1078.170772476" watchObservedRunningTime="2026-01-21 15:44:06.484459071 +0000 UTC m=+1078.175165335" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.485369 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podStartSLOduration=3.289102611 podStartE2EDuration="1m17.485361966s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.005739427 +0000 UTC m=+1003.696445691" lastFinishedPulling="2026-01-21 15:44:06.201998782 +0000 UTC m=+1077.892705046" observedRunningTime="2026-01-21 15:44:06.468424484 +0000 UTC m=+1078.159130748" watchObservedRunningTime="2026-01-21 15:44:06.485361966 +0000 UTC m=+1078.176068230" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.466728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e"} Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.467637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.483639 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podStartSLOduration=3.267499563 podStartE2EDuration="1m20.483625614s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.99660688 +0000 UTC m=+1003.687313144" lastFinishedPulling="2026-01-21 15:44:09.212732931 +0000 UTC m=+1080.903439195" observedRunningTime="2026-01-21 15:44:09.481200168 +0000 UTC m=+1081.171906432" watchObservedRunningTime="2026-01-21 15:44:09.483625614 +0000 UTC m=+1081.174331878" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.510314 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.781765 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.891605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.925170 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.657353 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.725951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.727489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.492585 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d"} Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.493266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.512721 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podStartSLOduration=56.109537903 podStartE2EDuration="1m24.512703154s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 
15:43:44.006780483 +0000 UTC m=+1055.697486747" lastFinishedPulling="2026-01-21 15:44:12.409945734 +0000 UTC m=+1084.100651998" observedRunningTime="2026-01-21 15:44:13.512387684 +0000 UTC m=+1085.203093968" watchObservedRunningTime="2026-01-21 15:44:13.512703154 +0000 UTC m=+1085.203409418" Jan 21 15:44:15 crc kubenswrapper[4739]: I0121 15:44:15.928598 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:44:20 crc kubenswrapper[4739]: I0121 15:44:20.374081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:44:20 crc kubenswrapper[4739]: I0121 15:44:20.415630 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:44:25 crc kubenswrapper[4739]: I0121 15:44:25.158314 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:44:35 crc kubenswrapper[4739]: I0121 15:44:35.223056 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:44:35 crc kubenswrapper[4739]: I0121 15:44:35.223581 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.424739 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.431054 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.434769 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.434990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.436852 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.437781 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.439027 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.511777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.512064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.512554 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.513720 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.515557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.543788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.614060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.637586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.714183 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc 
kubenswrapper[4739]: I0121 15:44:42.714264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.714291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.715441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.715538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.736735 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.753637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.831084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.201089 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.282479 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:43 crc kubenswrapper[4739]: W0121 15:44:43.285156 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31218b47_4223_44e7_a423_815983aa2ba6.slice/crio-fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2 WatchSource:0}: Error finding container fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2: Status 404 returned error can't find the container with id fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2 Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.692707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" event={"ID":"14b30814-219a-48df-850d-534d083bf646","Type":"ContainerStarted","Data":"c5b54fda8b9b8f36245f41caf21e22b565d757ef62ba54fa7f1b92e4cffb9021"} Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.694974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" event={"ID":"31218b47-4223-44e7-a423-815983aa2ba6","Type":"ContainerStarted","Data":"fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2"} Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.300620 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.364175 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.365291 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.378552 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.476958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.477033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.477098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.579551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.579561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.601229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288pr\" (UniqueName: 
\"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.663109 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.691301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.693310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.699006 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.717285 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.893339 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.894029 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.943785 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.031835 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.374375 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.509735 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.511128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.513458 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.516982 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517271 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517640 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517870 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.519713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.525961 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613350 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613451 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613471 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715030 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715216 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715267 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722459 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.724457 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.726153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.728275 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.733216 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.733339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.735204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.742400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.756623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.760745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.822112 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerStarted","Data":"f0067986b5d3826703553f818907fbc91914e289f5f1cc54bb202229f6e2f3eb"} Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.848277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.855118 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.856463 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.862623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.862967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863303 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863393 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863494 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863734 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.868788 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.912654 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030328 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030403 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030444 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030499 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030909 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.032192 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.032562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.040496 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.041458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.042655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.043082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.044386 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.048443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.048545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.050289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.058362 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.152041 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.211617 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.534186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.796047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"4be9ccaff7f44b9922cb3a123f667b6b06795c76e8f74a176cda84687b755499"} Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.799509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerStarted","Data":"f271834d8f4ea8d925ce34d625d0ace48b43d39d96de90042e012a2ac0c31487"} Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.828317 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.955335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.957078 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.960212 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5d5ff" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.960912 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.961322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.962628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.964769 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.987298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.069663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070565 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070606 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070746 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174066 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174327 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174785 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.175163 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.175434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.180320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.194329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.198389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.201057 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.224439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.289634 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.603854 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.826130 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"fad662ad6e333b9ea3c95b5367d19ddbe9e2fe1708760bac84dbfed7c5455433"} Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.828844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"9b30f94b9f3236e39738165e3f009216fa8c05c9ae2f0cee84393829c2ab8b70"} Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.314327 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.319401 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.321357 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323327 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323343 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323790 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.326362 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod 
\"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499761 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499900 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.593213 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.599634 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.600966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.600994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.602682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 
15:44:49.605604 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.605719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.606857 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.608220 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.611548 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.615780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616562 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616757 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ntnw" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616999 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.634063 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.646970 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.647771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod \"openstack-cell1-galera-0\" (UID: 
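
Only the local PV passes through the two-phase path: MountVolume.MountDevice prepares the device once at a node-level path, and MountVolume.SetUp then makes it available to the pod. In this log the device mount paths pair local-storageNN-crc with /mnt/openstack/pvNN; the sketch below just restates that observed naming pattern and is not derived from the PV spec, which this log does not show:

package main

import (
	"fmt"
	"regexp"
)

var pvNum = regexp.MustCompile(`local-storage(\d+)-crc`)

// deviceMountPath reproduces the pairing seen in the MountDevice entries above.
func deviceMountPath(pv string) string {
	m := pvNum.FindStringSubmatch(pv)
	if m == nil {
		return ""
	}
	return "/mnt/openstack/pv" + m[1]
}

func main() {
	fmt.Println(deviceMountPath("local-storage05-crc")) // /mnt/openstack/pv05
	fmt.Println(deviceMountPath("local-storage10-crc")) // /mnt/openstack/pv10
}
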
\"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.681936 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.702425 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.702463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.810406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " 
pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.812261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.812621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.813270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.819298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.829795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.835030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.018287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.351288 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.575625 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:50 crc kubenswrapper[4739]: W0121 15:44:50.586396 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa850895_9a18_4cff_83f8_bf7eea44559e.slice/crio-52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460 WatchSource:0}: Error finding container 52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460: Status 404 returned error can't find the container with id 52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460 Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.873591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aa850895-9a18-4cff-83f8-bf7eea44559e","Type":"ContainerStarted","Data":"52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460"} Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.880600 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"11ad9580e227682893c5331ef1b335cacf8b9b819a7592e7bc5d3f257489636c"} Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.036918 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.038126 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.040777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.061897 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.139574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.241526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.276953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.368453 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:44:53 crc kubenswrapper[4739]: I0121 15:44:53.938339 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.731488 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.734446 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.740554 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.740680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.743617 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.744507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.766540 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.768577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.801714 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerStarted","Data":"61ece0ca2bec34a69b536ce6fa39aec53042c12094f4235644f0b42c3bd4677d"} Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910766 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910803 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910868 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " 
pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910900 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " 
pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012851 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012952 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013123 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013143 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013367 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013450 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013552 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.015343 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.015710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.016272 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.025312 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.032781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.035221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.042619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.064590 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.082435 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.269454 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.272899 4739 util.go:30] "No sandbox for pod can be found. 
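
Unlike the database pods earlier, the two OVN pods mount mostly host-path volumes (var-run, var-log, etc-ovs, ...) alongside a few API-backed ones. Counting plugins by the UniqueName prefix makes that split visible; the list below is copied from the ovn-controller-g28pm and ovn-controller-ovs-tl2z8 entries above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	uniqueNames := []string{
		"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run",
		"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn",
		"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn",
		"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle",
		"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs",
		"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts",
		"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2",
		"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run",
		"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib",
		"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log",
		"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs",
		"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts",
		"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr",
	}
	counts := map[string]int{}
	for _, u := range uniqueNames {
		parts := strings.SplitN(u, "/", 3) // "kubernetes.io" / <plugin> / <identifier>
		counts[parts[1]]++
	}
	fmt.Println(counts) // host-path dominates for these two pods
}
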
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.278690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.278960 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.281522 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.281767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.282123 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.291336 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420739 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420829 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523319 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523942 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.524982 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.527551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.536148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.537585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.541627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.542373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.557162 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.564248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.609716 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.655299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:58 crc kubenswrapper[4739]: I0121 15:44:58.966168 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.208478 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.210048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.214857 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217194 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217968 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.230007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388238 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388367 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmch4\" (UniqueName: \"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388525 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495801 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmch4\" (UniqueName: \"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495896 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495930 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.496892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.496969 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.500520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.500993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.503453 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.505569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.518217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.524161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.593652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmch4\" (UniqueName: 
\"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.834480 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.173429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.174507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.177223 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.177422 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.208035 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.251202 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5sdng"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.253019 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.257270 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.263223 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sdng"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " 
pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.415259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.442709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.476866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.510415 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517111 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.520689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.536202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.537992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.577760 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5sdng"
Jan 21 15:45:03 crc kubenswrapper[4739]: W0121 15:45:03.639055 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ab2564_7d97_4b59_8687_376b2e37fba0.slice/crio-2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42 WatchSource:0}: Error finding container 2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42: Status 404 returned error can't find the container with id 2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42
Jan 21 15:45:03 crc kubenswrapper[4739]: W0121 15:45:03.641236 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614c729f_eac4_4445_bfdd_750236431c69.slice/crio-c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb WatchSource:0}: Error finding container c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb: Status 404 returned error can't find the container with id c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb
Jan 21 15:45:03 crc kubenswrapper[4739]: I0121 15:45:03.979656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42"}
Jan 21 15:45:03 crc kubenswrapper[4739]: I0121 15:45:03.981098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm" event={"ID":"614c729f-eac4-4445-bfdd-750236431c69","Type":"ContainerStarted","Data":"c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb"}
Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.223474 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.223750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.231183 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"]
Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.421538 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
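The liveness failure above is an ordinary HTTP probe that could not reach its target: "connect: connection refused" on 127.0.0.1:8798 means nothing was listening on the port, so the kubelet records a failed probe and will restart the container once the failure threshold is crossed. A minimal sketch of that probe as a corev1.Probe follows; scheme, host, path, and port come from the probe output, while the timing fields are assumed defaults the log does not show:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // mcdLivenessProbe rebuilds the probe behind the failure entries above.
    func mcdLivenessProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host:   "127.0.0.1",          // from the probe output
                    Path:   "/health",            // from the probe output
                    Port:   intstr.FromInt(8798), // from the probe output
                    Scheme: corev1.URISchemeHTTP,
                },
            },
            PeriodSeconds:    10, // assumed default; not in the log
            FailureThreshold: 3,  // assumed default; not in the log
        }
    }

Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.422904 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash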
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ll9r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d9c86609-18a0-47cb-8ce3-863d829a2f65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.424074 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.070654 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.669670 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.669875 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nddhbbh5cdh5d7h67h5d4h58fh675h65dh584h55fh95h5b5h687h55bh5d8h577h67bh55fh59bh649h79h58bh554h56h7bh5b7h57fhf8h555h5d5h5fdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8p4dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(aa850895-9a18-4cff-83f8-bf7eea44559e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.671355 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e"
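The memcached container dump above embeds its health checks as Go struct literals. Transcribed into ordinary corev1 probe definitions they are easier to read; every value below (the TCP connect on port 11211 and the delay/timeout/period/threshold numbers) is taken directly from the dump, so this is a readability sketch rather than a new configuration:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // memcachedProbes transcribes the Probe literals from the dump above:
    // both checks are plain TCP connects to the memcached port.
    func memcachedProbes() (liveness, readiness *corev1.Probe) {
        liveness = &corev1.Probe{
            ProbeHandler:        corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(11211)}},
            InitialDelaySeconds: 3,
            TimeoutSeconds:      5,
            PeriodSeconds:       3,
            SuccessThreshold:    1,
            FailureThreshold:    3,
        }
        readiness = &corev1.Probe{
            ProbeHandler:        corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(11211)}},
            InitialDelaySeconds: 5,
            TimeoutSeconds:      5,
            PeriodSeconds:       5,
            SuccessThreshold:    1,
            FailureThreshold:    3,
        }
        return liveness, readiness
    }

Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.695838 4739 log.go:32]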
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.696035 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9lzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d6502a4d-1f62-4f00-8c3f-7e51b14b616a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.697389 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.935922 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.936162 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 
/var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(807cb521-8cc2-4f29-9ff4-7138d251a817): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.937372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" Jan 21 15:45:17 crc kubenswrapper[4739]: I0121 15:45:17.076607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerStarted","Data":"4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82"} Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343106 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343454 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343524 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.862650 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.863502 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhbdh5fchc9h5dbh65bh59hb9h649h98hdfh65h9h8ch58dh599h54bh694h65bh66dh5bfh655h6bh95hbfh58fh64dh567h654h584hdfh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvzw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-g28pm_openstack(614c729f-eac4-4445-bfdd-750236431c69): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get 
\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.864723 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\\\": context canceled\"" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.865484 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.865603 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb5wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8p86b_openstack(14b30814-219a-48df-850d-534d083bf646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.866740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" podUID="14b30814-219a-48df-850d-534d083bf646" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.916254 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.916465 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-288pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-7856l_openstack(a495d430-61bc-4fbd-89d2-8c9df8cd19f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.917997 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.020673 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.020933 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f78hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-j62wq_openstack(31218b47-4223-44e7-a423-815983aa2ba6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" podUID="31218b47-4223-44e7-a423-815983aa2ba6" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022374 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022519 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rlhvc_openstack(4b5d2228-51e0-483b-9c8d-baba19b20fd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.023861 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.135708 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.135748 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.143130 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" Jan 21 15:45:25 crc kubenswrapper[4739]: I0121 15:45:25.174843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:45:26 crc kubenswrapper[4739]: W0121 15:45:26.660233 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2126ac0e_f6f2_4bfb_b364_1ef544fb6d72.slice/crio-e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef WatchSource:0}: Error finding container e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef: Status 404 returned error can't find the container with id e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.668380 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.671232 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhbdh5fchc9h5dbh65bh59hb9h649h98hdfh65h9h8ch58dh599h54bh694h65bh66dh5bfh655h6bh95hbfh58fh64dh567h654h584hdfh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-tl2z8_openstack(30ab2564-7d97-4b59-8687-376b2e37fba0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.673160 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-tl2z8" 
podUID="30ab2564-7d97-4b59-8687-376b2e37fba0"
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.742917 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq"
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.768929 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b"
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.790047 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") "
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.790205 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") "
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.791034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config" (OuterVolumeSpecName: "config") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") "
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792986 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.793003 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.797340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl" (OuterVolumeSpecName: "kube-api-access-f78hl") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "kube-api-access-f78hl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"14b30814-219a-48df-850d-534d083bf646\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") "
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894453 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"14b30814-219a-48df-850d-534d083bf646\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") "
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894801 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.897587 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config" (OuterVolumeSpecName: "config") pod "14b30814-219a-48df-850d-534d083bf646" (UID: "14b30814-219a-48df-850d-534d083bf646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.904897 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz" (OuterVolumeSpecName: "kube-api-access-mb5wz") pod "14b30814-219a-48df-850d-534d083bf646" (UID: "14b30814-219a-48df-850d-534d083bf646"). InnerVolumeSpecName "kube-api-access-mb5wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.996275 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.996316 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.146462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef"}
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.147288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" event={"ID":"14b30814-219a-48df-850d-534d083bf646","Type":"ContainerDied","Data":"c5b54fda8b9b8f36245f41caf21e22b565d757ef62ba54fa7f1b92e4cffb9021"}
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.147313 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b"
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.148314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" event={"ID":"31218b47-4223-44e7-a423-815983aa2ba6","Type":"ContainerDied","Data":"fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2"}
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.148346 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq"
Jan 21 15:45:27 crc kubenswrapper[4739]: E0121 15:45:27.150260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-tl2z8" podUID="30ab2564-7d97-4b59-8687-376b2e37fba0"
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.209837 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.218466 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.256849 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.269181 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.372903 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.702595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sdng"]
Jan 21 15:45:28 crc kubenswrapper[4739]: I0121 15:45:28.792137 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b30814-219a-48df-850d-534d083bf646" path="/var/lib/kubelet/pods/14b30814-219a-48df-850d-534d083bf646/volumes"
Jan 21 15:45:28 crc kubenswrapper[4739]: I0121 15:45:28.793499 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31218b47-4223-44e7-a423-815983aa2ba6" path="/var/lib/kubelet/pods/31218b47-4223-44e7-a423-815983aa2ba6/volumes"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.917986 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.918325 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.918496 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k86x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(582ba37d-9e3e-4696-a70e-69e702c6f931): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.919902 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931"
Jan 21 15:45:30 crc kubenswrapper[4739]: I0121 15:45:30.169896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sdng" event={"ID":"d9e43d4c-0e56-42cb-9f23-e225a7451d52","Type":"ContainerStarted","Data":"e29ab5186aa57bce0aa90b2400110021af96b5971be00b6b042fc090f367562d"}
Jan 21 15:45:30 crc kubenswrapper[4739]: I0121 15:45:30.171430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"42fc9da92168f5a1468de2b50184ece5d3691a5c665152c432bb2156b71c8a5c"}
Jan 21 15:45:30 crc kubenswrapper[4739]: E0121 15:45:30.173323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931"
Jan 21 15:45:31 crc kubenswrapper[4739]: I0121 15:45:31.179633 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerID="95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c" exitCode=0
Jan 21 15:45:31 crc kubenswrapper[4739]: I0121 15:45:31.179805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerDied","Data":"95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c"}
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.192802 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72"}
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.195372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b"}
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.679724 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.787797 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") "
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.788698 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") "
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.788898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") "
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.789514 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.790635 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.793966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.794500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq" (OuterVolumeSpecName: "kube-api-access-6csxq") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "kube-api-access-6csxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.895273 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.895317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.218698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc"}
Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerDied","Data":"4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82"}
Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225838 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82"
Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225841 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"
Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.222605 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223229 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223278 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223956 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.224002 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" gracePeriod=600
Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255837 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" exitCode=0
Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"}
Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255997 4739 scope.go:117] "RemoveContainer" containerID="c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"
Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.585212 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.585945 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n75h598h94h567h554h65bh55h68h664h67ch5d8h698hch68h546h5ch64dh679h8h5chch5b7h65fh5c4h74h677h5ddh5f8h598h555h5dch5f5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-5sdng_openstack(d9e43d4c-0e56-42cb-9f23-e225a7451d52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.587260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-5sdng" podUID="d9e43d4c-0e56-42cb-9f23-e225a7451d52"
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.316994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"bf8bf80cc61f65e98f97c753d41f6a6cc6904caf706de25e672381118ad6b3db"}
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.319514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aa850895-9a18-4cff-83f8-bf7eea44559e","Type":"ContainerStarted","Data":"cf3bcb99718cd1172c6f69d1bc2866b1e5cb54703687bc5e65e9420221124368"}
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.319756 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.333754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"fdadee6f544ebf52e50cbb9c53bf1004186aad05731f1ae21418e1e92a827ebf"}
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.338109 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.505000083 podStartE2EDuration="57.338088604s" podCreationTimestamp="2026-01-21 15:44:49 +0000 UTC" firstStartedPulling="2026-01-21 15:44:50.591344529 +0000 UTC m=+1122.282050793" lastFinishedPulling="2026-01-21 15:45:45.42443305 +0000 UTC m=+1177.115139314" observedRunningTime="2026-01-21 15:45:46.33537864 +0000 UTC m=+1178.026084904" watchObservedRunningTime="2026-01-21 15:45:46.338088604 +0000 UTC m=+1178.028794868"
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.342978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63"}
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.347568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"}
Jan 21 15:45:46 crc kubenswrapper[4739]: E0121 15:45:46.348951 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-5sdng" podUID="d9e43d4c-0e56-42cb-9f23-e225a7451d52"
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355411 4739 generic.go:334] "Generic (PLEG): container finished" podID="30ab2564-7d97-4b59-8687-376b2e37fba0" containerID="37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerDied","Data":"37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df"}
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358085 4739 generic.go:334] "Generic (PLEG): container finished" podID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerID="d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358148 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da"}
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.362198 4739 generic.go:334] "Generic (PLEG): container finished" podID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.362240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.371251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm" event={"ID":"614c729f-eac4-4445-bfdd-750236431c69","Type":"ContainerStarted","Data":"f19e07b1df0253b8d0c724c99d54101fa4bcfa59d38815390ccda1f070847333"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.372020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g28pm"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.373518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerStarted","Data":"321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.373732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7856l"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.384389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerStarted","Data":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.385180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.391966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerStarted","Data":"e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.392623 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.393654 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g28pm" podStartSLOduration=11.394987557 podStartE2EDuration="54.393634299s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:03.651100437 +0000 UTC m=+1135.341806701" lastFinishedPulling="2026-01-21 15:45:46.649747179 +0000 UTC m=+1178.340453443" observedRunningTime="2026-01-21 15:45:48.390254057 +0000 UTC m=+1180.080960321" watchObservedRunningTime="2026-01-21 15:45:48.393634299 +0000 UTC m=+1180.084340563"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.399449 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"296f26ac9134e0d0e10920a37848880abb3cf26e9fca068223f52be28d43ae37"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.401754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"f6b7fe252515d40b2624186bf4239ba612c2ffcb318fa0967f18778994c55013"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.406478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"d05c876c71c1e406126733d7897dfdab622a103b3f3c9e55275430434d6ad395"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.413747 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podStartSLOduration=3.402372446 podStartE2EDuration="1m3.413730127s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:46.391244654 +0000 UTC m=+1118.081950918" lastFinishedPulling="2026-01-21 15:45:46.402602335 +0000 UTC m=+1178.093308599" observedRunningTime="2026-01-21 15:45:48.408580187 +0000 UTC m=+1180.099286451" watchObservedRunningTime="2026-01-21 15:45:48.413730127 +0000 UTC m=+1180.104436391"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.432615 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podStartSLOduration=3.9295357380000002 podStartE2EDuration="1m3.432594293s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:46.903879698 +0000 UTC m=+1118.594585962" lastFinishedPulling="2026-01-21 15:45:46.406938253 +0000 UTC m=+1178.097644517" observedRunningTime="2026-01-21 15:45:48.430009262 +0000 UTC m=+1180.120715536" watchObservedRunningTime="2026-01-21 15:45:48.432594293 +0000 UTC m=+1180.123300557"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.453555 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.580804227 podStartE2EDuration="57.453531943s" podCreationTimestamp="2026-01-21 15:44:51 +0000 UTC" firstStartedPulling="2026-01-21 15:44:53.970129782 +0000 UTC m=+1125.660836046" lastFinishedPulling="2026-01-21 15:45:47.842857488 +0000 UTC m=+1179.533563762" observedRunningTime="2026-01-21 15:45:48.451004155 +0000 UTC m=+1180.141710419" watchObservedRunningTime="2026-01-21 15:45:48.453531943 +0000 UTC m=+1180.144238207"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.475655 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.485188205 podStartE2EDuration="50.475635997s" podCreationTimestamp="2026-01-21 15:44:58 +0000 UTC" firstStartedPulling="2026-01-21 15:45:26.662464163 +0000 UTC m=+1158.353170427" lastFinishedPulling="2026-01-21 15:45:46.652911945 +0000 UTC m=+1178.343618219" observedRunningTime="2026-01-21 15:45:48.473749576 +0000 UTC m=+1180.164455860" watchObservedRunningTime="2026-01-21 15:45:48.475635997 +0000 UTC m=+1180.166342261"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.500242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=37.087829298 podStartE2EDuration="54.500220788s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:29.221066515 +0000 UTC m=+1160.911772779" lastFinishedPulling="2026-01-21 15:45:46.633458005 +0000 UTC m=+1178.324164269" observedRunningTime="2026-01-21 15:45:48.493429913 +0000 UTC m=+1180.184136187" watchObservedRunningTime="2026-01-21 15:45:48.500220788 +0000 UTC m=+1180.190927052"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.415122 4739 generic.go:334] "Generic (PLEG): container finished" podID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerID="da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63" exitCode=0
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.415215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerDied","Data":"da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.417834 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerID="a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc" exitCode=0
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.417898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerDied","Data":"a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.422191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"e2ace69b2d50f500f5f458a05f0587865fe0b8b3e4ab89b1d85a9d78007d62d5"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.423411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.423760 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.471952 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tl2z8" podStartSLOduration=12.717377026 podStartE2EDuration="55.471936156s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:03.652477265 +0000 UTC m=+1135.343183519" lastFinishedPulling="2026-01-21 15:45:46.407036375 +0000 UTC m=+1178.097742649" observedRunningTime="2026-01-21 15:45:49.465031177 +0000 UTC m=+1181.155737441" watchObservedRunningTime="2026-01-21 15:45:49.471936156 +0000 UTC m=+1181.162642420"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.610733 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.653870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.835032 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.022605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.432947 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"7af49a53ab815c14ca4049e056d32b4e93d8fb1ce69749176e87adaffa08390f"}
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.435574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"a2d20ad34486c4cbec547098067ffe20502c7dea9e4781d7daef0b1a77cb8f1b"}
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.435874 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.482900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.487866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.057396238 podStartE2EDuration="1m4.487840049s" podCreationTimestamp="2026-01-21 15:44:46 +0000 UTC" firstStartedPulling="2026-01-21 15:44:48.607579174 +0000 UTC m=+1120.298285438" lastFinishedPulling="2026-01-21 15:45:32.038022985 +0000 UTC m=+1163.728729249" observedRunningTime="2026-01-21 15:45:50.484635272 +0000 UTC m=+1182.175341546" watchObservedRunningTime="2026-01-21 15:45:50.487840049 +0000 UTC m=+1182.178546313"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.492512 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.43087838 podStartE2EDuration="1m2.492489597s" podCreationTimestamp="2026-01-21 15:44:48 +0000 UTC" firstStartedPulling="2026-01-21 15:44:50.373659239 +0000 UTC m=+1122.064365493" lastFinishedPulling="2026-01-21 15:45:45.435270436 +0000 UTC m=+1177.125976710" observedRunningTime="2026-01-21 15:45:50.463580067 +0000 UTC m=+1182.154286341" watchObservedRunningTime="2026-01-21 15:45:50.492489597 +0000 UTC m=+1182.183195861"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.793586 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.794142 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns" containerID="cri-o://812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526" gracePeriod=10
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822055 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:50 crc kubenswrapper[4739]: E0121 15:45:50.822403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822557 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.823375 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.825633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.834655 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.849345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.881465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.919001 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.019968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020314 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.021628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.022249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.022959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.047745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.144226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.265845 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325371 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.338269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c" (OuterVolumeSpecName: "kube-api-access-x2v4c") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "kube-api-access-x2v4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.376450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.392186 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config" (OuterVolumeSpecName: "config") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427348 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427390 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427409 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443205 4739 generic.go:334] "Generic (PLEG): container finished" podID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526" exitCode=0
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443258 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"}
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443285 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"f271834d8f4ea8d925ce34d625d0ace48b43d39d96de90042e012a2ac0c31487"}
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443329 4739 scope.go:117] "RemoveContainer" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.465806 4739 scope.go:117] "RemoveContainer" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.477189 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.483157 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.492492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.499689 4739 scope.go:117] "RemoveContainer" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.500659 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": container with ID starting with 812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526 not found: ID does not exist" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.500697 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"} err="failed to get container status \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": rpc error: code = NotFound desc = could not find container \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": container with ID starting with 812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526 not found: ID does not exist"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.500726 4739 scope.go:117] "RemoveContainer" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.501384 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": container with ID starting with 08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7 not found: ID does not exist" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.501466 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"} err="failed to get container status \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": rpc error: code = NotFound desc = could not find container \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": container with ID starting with 08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7 not found: ID does not exist"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.593182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:51 crc kubenswrapper[4739]: W0121 15:45:51.601722 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e4ca37a_22c8_43e6_8c86_d78dad0f516f.slice/crio-7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835 WatchSource:0}: Error finding container 7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835: Status 404 returned error can't find the container with id 7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.767257 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.767794 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" containerID="cri-o://321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" gracePeriod=10
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.817683 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.818086 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818109 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.818184 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="init"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818195 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="init"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818357 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.819403 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.823794 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.906806 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.938701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.946730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.960062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.962454 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.010583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.047863 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054226 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054465 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054611 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.059213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.065016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.138202 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143703 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143755 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143792 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143952 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.246195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.247343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.247565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248441 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.249712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.250119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.249858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.251183 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.263674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.263868 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.264326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.278612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.418753 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.462677 4739 generic.go:334] "Generic (PLEG): container finished" podID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerID="321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" exitCode=0 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.462765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467075 4739 generic.go:334] "Generic (PLEG): container finished" podID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerID="084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02" exitCode=0 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467931 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerStarted","Data":"7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.729122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:45:52 crc kubenswrapper[4739]: W0121 15:45:52.743305 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f37975f_9bd3_4ae2_af25_af5f12096d34.slice/crio-f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256 WatchSource:0}: Error finding container f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256: Status 404 returned error can't find the container with id f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.804012 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" path="/var/lib/kubelet/pods/4b5d2228-51e0-483b-9c8d-baba19b20fd5/volumes" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.965724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:45:53 crc kubenswrapper[4739]: W0121 15:45:53.029980 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3600d295_3864_446c_a407_b1b80c2a2c50.slice/crio-3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b WatchSource:0}: Error finding container 3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b: Status 404 returned error can't find the container with id 3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.056310 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.175578 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr" (OuterVolumeSpecName: "kube-api-access-288pr") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "kube-api-access-288pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.216275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.217237 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config" (OuterVolumeSpecName: "config") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271730 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271761 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271773 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.478092 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482009 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"f0067986b5d3826703553f818907fbc91914e289f5f1cc54bb202229f6e2f3eb"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482154 4739 scope.go:117] "RemoveContainer" containerID="321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.484526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerStarted","Data":"646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.485381 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.487297 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerID="e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427" exitCode=0 Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.488296 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.488317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" 
event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerStarted","Data":"f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.514349 4739 scope.go:117] "RemoveContainer" containerID="d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.551431 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" podStartSLOduration=3.551404683 podStartE2EDuration="3.551404683s" podCreationTimestamp="2026-01-21 15:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:53.538427059 +0000 UTC m=+1185.229133323" watchObservedRunningTime="2026-01-21 15:45:53.551404683 +0000 UTC m=+1185.242110947" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.574591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.586162 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.501209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerStarted","Data":"e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7"} Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.501629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.525058 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-64gmb" podStartSLOduration=3.525040024 podStartE2EDuration="3.525040024s" podCreationTimestamp="2026-01-21 15:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:54.523046439 +0000 UTC m=+1186.213752703" watchObservedRunningTime="2026-01-21 15:45:54.525040024 +0000 UTC m=+1186.215746288" Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.793782 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" path="/var/lib/kubelet/pods/a495d430-61bc-4fbd-89d2-8c9df8cd19f0/volumes" Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.514006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"83938b054ebe6108c84926d2d38a037e842892ddba97940e368926ca6c241832"} Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.515245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"f69bda5b0e11e1dca559d07cfbfe0affa3cb6483b21ced4a3e7ca090c94fc004"} Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.538830 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.120126084 podStartE2EDuration="4.538792129s" podCreationTimestamp="2026-01-21 15:45:51 +0000 UTC" firstStartedPulling="2026-01-21 15:45:53.03788304 +0000 UTC m=+1184.728589304" lastFinishedPulling="2026-01-21 15:45:54.456549085 
+0000 UTC m=+1186.147255349" observedRunningTime="2026-01-21 15:45:55.533435883 +0000 UTC m=+1187.224142147" watchObservedRunningTime="2026-01-21 15:45:55.538792129 +0000 UTC m=+1187.229498393" Jan 21 15:45:56 crc kubenswrapper[4739]: I0121 15:45:56.521201 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.290465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.290764 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.402754 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.597307 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582186 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:45:59 crc kubenswrapper[4739]: E0121 15:45:59.582570 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="init" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582584 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="init" Jan 21 15:45:59 crc kubenswrapper[4739]: E0121 15:45:59.582623 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582842 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.591965 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.594836 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.601391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.606349 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.607371 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.625673 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671551 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.683207 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.683268 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.750532 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773320 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod 
\"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.775294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.775433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.798019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.798771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.929723 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.946363 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.001207 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.002476 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.018596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.077200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.077278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.112659 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.115425 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.131409 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.160487 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.179728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.185430 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.197705 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.199199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.216129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.220972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282163 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282366 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282509 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282618 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.283645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.303966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.307796 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.310804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.319222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.328378 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.384788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.404830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.439686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.453538 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.485695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.485770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.486959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.503422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.522970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.605632 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.615851 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.632304 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.734709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.938272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.947376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.011627 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.146330 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.302240 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:01 crc kubenswrapper[4739]: W0121 15:46:01.313870 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc4447d_5821_489f_942f_ce925194a473.slice/crio-b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260 WatchSource:0}: Error finding container b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260: Status 404 returned error can't find the container with id b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260 Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.444000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.585637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerStarted","Data":"d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.587690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerStarted","Data":"b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.588790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerStarted","Data":"69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.589679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerStarted","Data":"d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.590601 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerStarted","Data":"b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.591863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerStarted","Data":"92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.591889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerStarted","Data":"9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.140044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.222992 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.223295 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" containerID="cri-o://646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" gracePeriod=10 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.601847 4739 generic.go:334] "Generic (PLEG): container finished" podID="612cd690-e4aa-49df-862b-3484cc15bac0" containerID="1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f" exitCode=0 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.602360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerDied","Data":"1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.607185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerStarted","Data":"592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.613365 4739 generic.go:334] "Generic (PLEG): container finished" podID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerID="646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" exitCode=0 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.613449 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.615073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerStarted","Data":"a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.619796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sdng" event={"ID":"d9e43d4c-0e56-42cb-9f23-e225a7451d52","Type":"ContainerStarted","Data":"b3e0071acf354d27b765baf071892894f87a224279b484a619ade242b4d447be"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.628548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" 
event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerStarted","Data":"f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.641389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerStarted","Data":"92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.662368 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-abc8-account-create-update-fm7tf" podStartSLOduration=2.662349549 podStartE2EDuration="2.662349549s" podCreationTimestamp="2026-01-21 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.654943177 +0000 UTC m=+1194.345649461" watchObservedRunningTime="2026-01-21 15:46:02.662349549 +0000 UTC m=+1194.353055813" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.674710 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9f59-account-create-update-7sbc4" podStartSLOduration=2.674692135 podStartE2EDuration="2.674692135s" podCreationTimestamp="2026-01-21 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.672396033 +0000 UTC m=+1194.363102317" watchObservedRunningTime="2026-01-21 15:46:02.674692135 +0000 UTC m=+1194.365398409" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.697387 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-d45dw" podStartSLOduration=3.697365655 podStartE2EDuration="3.697365655s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.696518691 +0000 UTC m=+1194.387224955" watchObservedRunningTime="2026-01-21 15:46:02.697365655 +0000 UTC m=+1194.388071919" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.729495 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5sdng" podStartSLOduration=-9223371974.125303 podStartE2EDuration="1m2.729471941s" podCreationTimestamp="2026-01-21 15:45:00 +0000 UTC" firstStartedPulling="2026-01-21 15:45:29.226211835 +0000 UTC m=+1160.916918109" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.717516144 +0000 UTC m=+1194.408222418" watchObservedRunningTime="2026-01-21 15:46:02.729471941 +0000 UTC m=+1194.420178205" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.747691 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8255-account-create-update-2tksx" podStartSLOduration=3.747671528 podStartE2EDuration="3.747671528s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.744792558 +0000 UTC m=+1194.435498822" watchObservedRunningTime="2026-01-21 15:46:02.747671528 +0000 UTC m=+1194.438377792" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.770776 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-db-create-bbwz7" podStartSLOduration=3.770757297 podStartE2EDuration="3.770757297s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.76721417 +0000 UTC m=+1194.457920434" watchObservedRunningTime="2026-01-21 15:46:02.770757297 +0000 UTC m=+1194.461463561" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.798764 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.934953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.947217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5" (OuterVolumeSpecName: "kube-api-access-w45d5") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "kube-api-access-w45d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.983296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.986386 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config" (OuterVolumeSpecName: "config") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.988744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037910 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037972 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037994 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.038009 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.650641 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.650962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.651015 4739 scope.go:117] "RemoveContainer" containerID="646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.654080 4739 generic.go:334] "Generic (PLEG): container finished" podID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerID="a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.654153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerDied","Data":"a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.659044 4739 generic.go:334] "Generic (PLEG): container finished" podID="93643236-1032-4392-8463-f9e48dc2ae84" containerID="f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.659120 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerDied","Data":"f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.665477 4739 generic.go:334] "Generic (PLEG): container finished" podID="236f8c92-05a6-4512-a96e-61babb7c44e6" 
containerID="92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.665632 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerDied","Data":"92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.673085 4739 generic.go:334] "Generic (PLEG): container finished" podID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerID="92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.673435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerDied","Data":"92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.680798 4739 generic.go:334] "Generic (PLEG): container finished" podID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerID="beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.680889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.686837 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerID="f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.687071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.693799 4739 generic.go:334] "Generic (PLEG): container finished" podID="9dc4447d-5821-489f-942f-ce925194a473" containerID="592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.694038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerDied","Data":"592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.716996 4739 scope.go:117] "RemoveContainer" containerID="084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.815656 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.829484 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.015838 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.158868 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"612cd690-e4aa-49df-862b-3484cc15bac0\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.159502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "612cd690-e4aa-49df-862b-3484cc15bac0" (UID: "612cd690-e4aa-49df-862b-3484cc15bac0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.159599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"612cd690-e4aa-49df-862b-3484cc15bac0\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.160034 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.182015 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n" (OuterVolumeSpecName: "kube-api-access-mnb5n") pod "612cd690-e4aa-49df-862b-3484cc15bac0" (UID: "612cd690-e4aa-49df-862b-3484cc15bac0"). InnerVolumeSpecName "kube-api-access-mnb5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.261207 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerDied","Data":"d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703140 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703188 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.709165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.710338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.713508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.739168 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371957.115627 podStartE2EDuration="1m19.739148914s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:47.568998084 +0000 UTC m=+1119.259704348" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:04.736744618 +0000 UTC m=+1196.427450892" watchObservedRunningTime="2026-01-21 15:46:04.739148914 +0000 UTC m=+1196.429855178" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.781230 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.419420858 podStartE2EDuration="1m19.781202202s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:47.838372891 +0000 UTC m=+1119.529079155" lastFinishedPulling="2026-01-21 15:45:29.200154235 +0000 UTC m=+1160.890860499" observedRunningTime="2026-01-21 15:46:04.770964733 +0000 UTC m=+1196.461670997" watchObservedRunningTime="2026-01-21 15:46:04.781202202 +0000 UTC m=+1196.471908476" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.797129 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" path="/var/lib/kubelet/pods/3e4ca37a-22c8-43e6-8c86-d78dad0f516f/volumes" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.268361 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"9dc4447d-5821-489f-942f-ce925194a473\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389289 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"9dc4447d-5821-489f-942f-ce925194a473\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389961 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dc4447d-5821-489f-942f-ce925194a473" (UID: "9dc4447d-5821-489f-942f-ce925194a473"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.395416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl" (OuterVolumeSpecName: "kube-api-access-9wdzl") pod "9dc4447d-5821-489f-942f-ce925194a473" (UID: "9dc4447d-5821-489f-942f-ce925194a473"). InnerVolumeSpecName "kube-api-access-9wdzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.491517 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.491550 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.498184 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.506732 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.514879 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.522067 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.592798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"9a2b900b-3c0d-4958-ba5b-627101c68acb\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.592954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"236f8c92-05a6-4512-a96e-61babb7c44e6\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593113 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"236f8c92-05a6-4512-a96e-61babb7c44e6\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593162 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"9a2b900b-3c0d-4958-ba5b-627101c68acb\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593232 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"93643236-1032-4392-8463-f9e48dc2ae84\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"93643236-1032-4392-8463-f9e48dc2ae84\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594326 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fb43d43-ff94-49b3-9b9c-6db46b040c95" (UID: "2fb43d43-ff94-49b3-9b9c-6db46b040c95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a2b900b-3c0d-4958-ba5b-627101c68acb" (UID: "9a2b900b-3c0d-4958-ba5b-627101c68acb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "236f8c92-05a6-4512-a96e-61babb7c44e6" (UID: "236f8c92-05a6-4512-a96e-61babb7c44e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93643236-1032-4392-8463-f9e48dc2ae84" (UID: "93643236-1032-4392-8463-f9e48dc2ae84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595430 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595460 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595473 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.600023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw" (OuterVolumeSpecName: "kube-api-access-b49bw") pod "93643236-1032-4392-8463-f9e48dc2ae84" (UID: "93643236-1032-4392-8463-f9e48dc2ae84"). InnerVolumeSpecName "kube-api-access-b49bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.600133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm" (OuterVolumeSpecName: "kube-api-access-sj2wm") pod "2fb43d43-ff94-49b3-9b9c-6db46b040c95" (UID: "2fb43d43-ff94-49b3-9b9c-6db46b040c95"). InnerVolumeSpecName "kube-api-access-sj2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.602033 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874" (OuterVolumeSpecName: "kube-api-access-7z874") pod "236f8c92-05a6-4512-a96e-61babb7c44e6" (UID: "236f8c92-05a6-4512-a96e-61babb7c44e6"). InnerVolumeSpecName "kube-api-access-7z874". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.602528 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv" (OuterVolumeSpecName: "kube-api-access-scrnv") pod "9a2b900b-3c0d-4958-ba5b-627101c68acb" (UID: "9a2b900b-3c0d-4958-ba5b-627101c68acb"). InnerVolumeSpecName "kube-api-access-scrnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697004 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697048 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697063 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697074 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697088 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerDied","Data":"d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725130 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725179 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.728880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerDied","Data":"b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.729012 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.729113 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737693 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerDied","Data":"9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737745 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.739969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerDied","Data":"b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.739993 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.740008 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.743809 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.743878 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerDied","Data":"69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.744081 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.887928 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888230 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888242 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888256 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="init" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888262 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="init" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888272 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888278 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888286 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888307 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888314 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888328 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888334 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888343 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888348 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888358 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888364 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888508 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888522 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888530 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888541 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888550 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.889060 4739 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.895722 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.901666 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.018587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.018678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.120779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.120982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.121917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.142137 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.203692 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.211844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.473283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.655025 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.761425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerStarted","Data":"39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8"} Jan 21 15:46:08 crc kubenswrapper[4739]: I0121 15:46:08.770522 4739 generic.go:334] "Generic (PLEG): container finished" podID="60868a94-fd3e-46df-b77c-465afd0eb767" containerID="67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b" exitCode=0 Jan 21 15:46:08 crc kubenswrapper[4739]: I0121 15:46:08.770645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerDied","Data":"67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b"} Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.111434 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.181259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"60868a94-fd3e-46df-b77c-465afd0eb767\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.181352 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"60868a94-fd3e-46df-b77c-465afd0eb767\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.182129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60868a94-fd3e-46df-b77c-465afd0eb767" (UID: "60868a94-fd3e-46df-b77c-465afd0eb767"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.201500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs" (OuterVolumeSpecName: "kube-api-access-ph6cs") pod "60868a94-fd3e-46df-b77c-465afd0eb767" (UID: "60868a94-fd3e-46df-b77c-465afd0eb767"). InnerVolumeSpecName "kube-api-access-ph6cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.283388 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.283498 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.475796 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:10 crc kubenswrapper[4739]: E0121 15:46:10.476189 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.476212 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.476434 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.477113 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.480573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.480856 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.489610 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod 
\"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695294 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695335 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.699146 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.699934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.700587 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.715614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.799537 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804204 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerDied","Data":"39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8"} Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804243 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804320 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:11 crc kubenswrapper[4739]: I0121 15:46:11.363978 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:11 crc kubenswrapper[4739]: I0121 15:46:11.811479 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerStarted","Data":"8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c"} Jan 21 15:46:13 crc kubenswrapper[4739]: I0121 15:46:13.304117 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:13 crc kubenswrapper[4739]: I0121 15:46:13.312711 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:14 crc kubenswrapper[4739]: I0121 15:46:14.793610 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" path="/var/lib/kubelet/pods/60868a94-fd3e-46df-b77c-465afd0eb767/volumes" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.157149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.215007 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.693043 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.695122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.805631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823449 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823602 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.825507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.869086 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.925484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926079 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926328 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926561 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 
15:46:17.958494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.009692 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.010672 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.013849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.016213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.027966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.028037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.028783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.042490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.086518 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.102722 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lnjht"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.104541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.129740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.129806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.142161 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.143419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.154648 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.160546 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.160910 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnjht"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.204981 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.231653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.231950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.233326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.272484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.344368 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.345870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.347435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.354576 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kldms"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.355523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.359354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.360024 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.361260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.361538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.371428 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kldms"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.376378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.432328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.491434 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.492940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.495486 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.498353 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.505534 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.506802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.516688 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lwrxr"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.519126 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.523262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.541370 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lwrxr"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.552956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.560090 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.567084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.578503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.655899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.655968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656018 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.657316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.694212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.706519 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.755347 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.840296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.853773 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.909984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5xglw"]
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.949207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerStarted","Data":"07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6"}
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.969366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerStarted","Data":"ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc"}
Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.998988 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hr5n6"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.265174 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.389490 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnjht"]
Jan 21 15:46:19 crc kubenswrapper[4739]: W0121 15:46:19.408661 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f5e4610_5432_4990_9e2b_a2d084e8316f.slice/crio-fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0 WatchSource:0}: Error finding container fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0: Status 404 returned error can't find the container with id fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.451708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.735625 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kldms"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.833393 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lwrxr"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.839892 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"]
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.978390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerStarted","Data":"fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0"}
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.980093 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerID="310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec" exitCode=0
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.980171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerDied","Data":"310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec"}
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.983148 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerID="f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff" exitCode=0
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.983235 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerDied","Data":"f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff"}
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.985054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerStarted","Data":"ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a"}
Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.985106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerStarted","Data":"9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc"}
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.030928 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-70e6-account-create-update-k6c57" podStartSLOduration=3.030913702 podStartE2EDuration="3.030913702s" podCreationTimestamp="2026-01-21 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:20.027773617 +0000 UTC m=+1211.718479881" watchObservedRunningTime="2026-01-21 15:46:20.030913702 +0000 UTC m=+1211.721619966"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.118987 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 15:46:20 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 15:46:20 crc kubenswrapper[4739]: >
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.126998 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.134218 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.403248 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"]
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.404767 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.409185 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.413515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"]
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.512978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513018 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513157 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616450 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616585 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.618653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.642984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.721872 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.996323 4739 generic.go:334] "Generic (PLEG): container finished" podID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerID="ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a" exitCode=0
Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.997026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerDied","Data":"ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a"}
Jan 21 15:46:25 crc kubenswrapper[4739]: I0121 15:46:25.104434 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 15:46:25 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 15:46:25 crc kubenswrapper[4739]: >
Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.000026 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6589cf07_234c_4ade_ad9b_8525147c0c5e.slice/crio-a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c WatchSource:0}: Error finding container a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c: Status 404 returned error can't find the container with id a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c
Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.001681 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19632c0_51a3_472e_a64c_33e82057e0aa.slice/crio-f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d WatchSource:0}: Error finding container f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d: Status 404 returned error can't find the container with id f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d
Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.006162 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe3c507_7436_4ea4_8e4b_ad0879e1eb3c.slice/crio-b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5 WatchSource:0}: Error finding container b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5: Status 404 returned error can't find the container with id b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5
Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.064828 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.065376 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwgjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jp27h_openstack(1f3d6499-baea-49df-8dab-393a192e0a6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.069598 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jp27h" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.069846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerStarted","Data":"f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.073039 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerDied","Data":"ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.073085 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.074265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerDied","Data":"9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.074291 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.075596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerStarted","Data":"a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.076458 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerStarted","Data":"b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.077275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerStarted","Data":"82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.078358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerDied","Data":"07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6"}
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.078378 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.160713 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 15:46:30 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 15:46:30 crc kubenswrapper[4739]: >
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.225072 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.322617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"b8a0eafc-020a-44b3-a392-6b8eea12109e\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.322684 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"b8a0eafc-020a-44b3-a392-6b8eea12109e\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.324404 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8a0eafc-020a-44b3-a392-6b8eea12109e" (UID: "b8a0eafc-020a-44b3-a392-6b8eea12109e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.325721 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.334302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z" (OuterVolumeSpecName: "kube-api-access-hf92z") pod "b8a0eafc-020a-44b3-a392-6b8eea12109e" (UID: "b8a0eafc-020a-44b3-a392-6b8eea12109e"). InnerVolumeSpecName "kube-api-access-hf92z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.404379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.428289 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.445948 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8da5917-a0c7-4e03-b13a-5d3af63e49bd" (UID: "c8da5917-a0c7-4e03-b13a-5d3af63e49bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") "
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532478 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" (UID: "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.533059 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.533091 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.536441 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x" (OuterVolumeSpecName: "kube-api-access-l8w8x") pod "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" (UID: "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0"). InnerVolumeSpecName "kube-api-access-l8w8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.536591 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv" (OuterVolumeSpecName: "kube-api-access-42gnv") pod "c8da5917-a0c7-4e03-b13a-5d3af63e49bd" (UID: "c8da5917-a0c7-4e03-b13a-5d3af63e49bd"). InnerVolumeSpecName "kube-api-access-42gnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.548686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"]
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.635111 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.635143 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.094806 4739 generic.go:334] "Generic (PLEG): container finished" podID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerID="5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61" exitCode=0
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.095159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerDied","Data":"5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.096800 4739 generic.go:334] "Generic (PLEG): container finished" podID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerID="d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6" exitCode=0
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.096937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerDied","Data":"d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.102161 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerStarted","Data":"e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.102237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerStarted","Data":"fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.105052 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerID="ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4" exitCode=0
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.105132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerDied","Data":"ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerDied","Data":"af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd"}
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108268 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerID="af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd" exitCode=0
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108384 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6"
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108404 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw"
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108406 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57"
Jan 21 15:46:31 crc kubenswrapper[4739]: E0121 15:46:31.110503 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-jp27h" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b"
Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.168258 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g28pm-config-wthq5" podStartSLOduration=11.168235688 podStartE2EDuration="11.168235688s" podCreationTimestamp="2026-01-21 15:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:31.16026364 +0000 UTC m=+1222.850969914" watchObservedRunningTime="2026-01-21 15:46:31.168235688 +0000 UTC m=+1222.858941952"
Jan 21 15:46:32 crc kubenswrapper[4739]: I0121 15:46:32.121272 4739 generic.go:334] "Generic (PLEG): container finished" podID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerID="e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534" exitCode=0
Jan 21 15:46:32 crc kubenswrapper[4739]: I0121 15:46:32.121473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerDied","Data":"e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534"}
Jan 21 15:46:35 crc kubenswrapper[4739]: I0121 15:46:35.106522 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g28pm"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.701654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.716324 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.727018 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.735596 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"5f5e4610-5432-4990-9e2b-a2d084e8316f\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"6589cf07-234c-4ade-ad9b-8525147c0c5e\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"6589cf07-234c-4ade-ad9b-8525147c0c5e\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"5f5e4610-5432-4990-9e2b-a2d084e8316f\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.739711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6589cf07-234c-4ade-ad9b-8525147c0c5e" (UID: "6589cf07-234c-4ade-ad9b-8525147c0c5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.739788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f5e4610-5432-4990-9e2b-a2d084e8316f" (UID: "5f5e4610-5432-4990-9e2b-a2d084e8316f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.747020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz" (OuterVolumeSpecName: "kube-api-access-2ptpz") pod "5f5e4610-5432-4990-9e2b-a2d084e8316f" (UID: "5f5e4610-5432-4990-9e2b-a2d084e8316f"). InnerVolumeSpecName "kube-api-access-2ptpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.747513 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5"
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.754641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh" (OuterVolumeSpecName: "kube-api-access-qcphh") pod "6589cf07-234c-4ade-ad9b-8525147c0c5e" (UID: "6589cf07-234c-4ade-ad9b-8525147c0c5e"). InnerVolumeSpecName "kube-api-access-qcphh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.841293 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"a19632c0-51a3-472e-a64c-33e82057e0aa\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842726 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842849 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842936 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"a19632c0-51a3-472e-a64c-33e82057e0aa\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") "
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843544 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843559 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843571 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843583 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a19632c0-51a3-472e-a64c-33e82057e0aa" (UID: "a19632c0-51a3-472e-a64c-33e82057e0aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run" (OuterVolumeSpecName: "var-run") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.845512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts" (OuterVolumeSpecName: "scripts") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.848196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3b6e9ee-dc03-4f47-a467-68d20988d0d5" (UID: "c3b6e9ee-dc03-4f47-a467-68d20988d0d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.858747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t" (OuterVolumeSpecName: "kube-api-access-ndc2t") pod "a19632c0-51a3-472e-a64c-33e82057e0aa" (UID: "a19632c0-51a3-472e-a64c-33e82057e0aa"). InnerVolumeSpecName "kube-api-access-ndc2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.863571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm" (OuterVolumeSpecName: "kube-api-access-kkqfm") pod "c3b6e9ee-dc03-4f47-a467-68d20988d0d5" (UID: "c3b6e9ee-dc03-4f47-a467-68d20988d0d5"). InnerVolumeSpecName "kube-api-access-kkqfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.864985 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk" (OuterVolumeSpecName: "kube-api-access-g28gk") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "kube-api-access-g28gk".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945250 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945386 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945405 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945419 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945437 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945451 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945465 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945481 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945494 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945505 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerDied","Data":"a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205238 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205326 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerDied","Data":"fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209787 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209873 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.214689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerDied","Data":"fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.214925 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.216189 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.216366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerStarted","Data":"50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.222410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerDied","Data":"82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.222458 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.223808 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerDied","Data":"f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226644 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226769 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.248069 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kldms" podStartSLOduration=12.423748151 podStartE2EDuration="19.247758156s" podCreationTimestamp="2026-01-21 15:46:18 +0000 UTC" firstStartedPulling="2026-01-21 15:46:30.008449867 +0000 UTC m=+1221.699156131" lastFinishedPulling="2026-01-21 15:46:36.832459872 +0000 UTC m=+1228.523166136" observedRunningTime="2026-01-21 15:46:37.237901787 +0000 UTC m=+1228.928608061" watchObservedRunningTime="2026-01-21 15:46:37.247758156 +0000 UTC m=+1228.938464420" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.902116 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.909707 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:38 crc kubenswrapper[4739]: I0121 15:46:38.796171 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" path="/var/lib/kubelet/pods/4ab1c66a-4b45-4ecf-a216-9b189847dc46/volumes" Jan 21 15:46:44 crc kubenswrapper[4739]: I0121 15:46:44.300442 4739 generic.go:334] "Generic (PLEG): container finished" podID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerID="50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2" exitCode=0 Jan 21 15:46:44 crc kubenswrapper[4739]: I0121 15:46:44.300520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerDied","Data":"50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2"} Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.653313 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792421 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792663 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.806291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z" (OuterVolumeSpecName: "kube-api-access-wp42z") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "kube-api-access-wp42z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.824574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.857937 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data" (OuterVolumeSpecName: "config-data") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894345 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894377 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894392 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerDied","Data":"b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5"} Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319725 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319481 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750130 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750425 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750436 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750450 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750455 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750465 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750470 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750479 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750484 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750497 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750502 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750519 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750531 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750537 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750544 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750550 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc 
kubenswrapper[4739]: E0121 15:46:46.750561 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750567 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755031 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755051 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755060 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755067 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755076 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755098 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755104 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755114 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755622 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773243 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773381 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773663 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.790855 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806896 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.807007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.833392 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.833428 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.834966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.842733 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.908930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 
15:46:46.909339 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.921303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.923555 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.924191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.935666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.945031 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.980466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010838 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.011021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.011832 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.012454 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.013293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.014111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.064203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" 
(UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.093992 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.105649 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.117075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.120710 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.120904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.149970 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.199240 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213790 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213929 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.214025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.214047 
4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.316035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.316342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.331024 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.335687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.350388 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.353455 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.373098 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.374257 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.382491 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.394571 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.397495 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.397586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.429321 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerStarted","Data":"6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac"} Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.441238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.456996 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.457970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.462575 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.463669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484018 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484252 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484297 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484401 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484503 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.506892 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525207 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525294 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " 
pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525316 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525460 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525501 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " 
pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.538233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.539618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.543716 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.664987 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665828 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 
21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.666028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.674237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.676352 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.691787 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.697473 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.698199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.698992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.704118 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.704203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.721803 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.734347 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.751770 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.760061 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.761075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.763438 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.768266 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jp27h" podStartSLOduration=3.945587395 podStartE2EDuration="37.768245178s" podCreationTimestamp="2026-01-21 15:46:10 +0000 UTC" firstStartedPulling="2026-01-21 15:46:11.385860751 +0000 UTC m=+1203.076567015" lastFinishedPulling="2026-01-21 15:46:45.208518534 +0000 UTC m=+1236.899224798" observedRunningTime="2026-01-21 15:46:47.560220165 +0000 UTC m=+1239.250926439" watchObservedRunningTime="2026-01-21 15:46:47.768245178 +0000 UTC m=+1239.458951442" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.769848 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.769889 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.771718 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.821005 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.826287 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.833138 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.842432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.852037 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.878848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.878902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879152 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879774 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981211 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981391 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.982212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: 
\"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.012997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.082878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.084382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.084929 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 
15:46:48.088475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.089169 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.098057 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.103601 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.204358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.242175 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.258370 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.337579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.502986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerStarted","Data":"02fdfa299ce4dd3cbc7fac3167b48e86e3bbcfe9f2b346e5590415eba1c98571"} Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.531270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerStarted","Data":"4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6"} Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.614611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.889966 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.994584 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.175797 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.208558 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.581379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" 
event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerStarted","Data":"72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.591843 4739 generic.go:334] "Generic (PLEG): container finished" podID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerID="d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f" exitCode=0 Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.591963 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerDied","Data":"d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.600117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerStarted","Data":"c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.602310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerStarted","Data":"90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.612409 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"7211b1d26178cb64e4faaf584f0788cadfa23e148dc68767018276c936da671e"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.634972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerStarted","Data":"04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.645118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerStarted","Data":"2944760882b05c708f270896329b53b5ff2a4da1eec8a53b5962df9cab5a1dd9"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.657091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerStarted","Data":"bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.702301 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m5v9h" podStartSLOduration=3.702282291 podStartE2EDuration="3.702282291s" podCreationTimestamp="2026-01-21 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:49.700174874 +0000 UTC m=+1241.390881158" watchObservedRunningTime="2026-01-21 15:46:49.702282291 +0000 UTC m=+1241.392988555" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.075941 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176392 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176624 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.206083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw" (OuterVolumeSpecName: "kube-api-access-psbpw") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "kube-api-access-psbpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.253362 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.260963 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279372 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279408 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279424 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.288883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.289613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config" (OuterVolumeSpecName: "config") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.301181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.381844 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.381881 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.675512 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerID="5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79" exitCode=0 Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.675666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.680304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerStarted","Data":"b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.691590 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.693856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerDied","Data":"02fdfa299ce4dd3cbc7fac3167b48e86e3bbcfe9f2b346e5590415eba1c98571"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.693927 4739 scope.go:117] "RemoveContainer" containerID="d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.753760 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-r5znj" podStartSLOduration=3.75373793 podStartE2EDuration="3.75373793s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:50.747909862 +0000 UTC m=+1242.438616126" watchObservedRunningTime="2026-01-21 15:46:50.75373793 +0000 UTC m=+1242.444444194" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.862303 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.868350 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:51 crc kubenswrapper[4739]: I0121 15:46:51.728040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerStarted","Data":"e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95"} Jan 21 15:46:51 crc kubenswrapper[4739]: I0121 15:46:51.778376 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" podStartSLOduration=4.778354308 podStartE2EDuration="4.778354308s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:51.777227647 +0000 UTC m=+1243.467933911" watchObservedRunningTime="2026-01-21 15:46:51.778354308 +0000 UTC m=+1243.469060572" Jan 21 15:46:52 crc kubenswrapper[4739]: I0121 15:46:52.742985 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:52 crc kubenswrapper[4739]: I0121 15:46:52.798140 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" path="/var/lib/kubelet/pods/4e7f4af0-293d-48d2-84da-ebb62e612fb2/volumes" Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.207518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.289133 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.289371 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" containerID="cri-o://e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7" gracePeriod=10 Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.801299 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerID="e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7" exitCode=0 Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.806718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7"} Jan 21 15:46:59 crc kubenswrapper[4739]: I0121 15:46:59.814859 4739 generic.go:334] "Generic (PLEG): container finished" podID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerID="90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514" exitCode=0 Jan 21 15:46:59 crc kubenswrapper[4739]: I0121 15:46:59.814923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerDied","Data":"90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514"} Jan 21 15:47:02 crc kubenswrapper[4739]: I0121 15:47:02.140050 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.463451 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504471 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504532 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.513925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.514106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts" (OuterVolumeSpecName: "scripts") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.519163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf" (OuterVolumeSpecName: "kube-api-access-f48kf") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "kube-api-access-f48kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.529350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.536003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data" (OuterVolumeSpecName: "config-data") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.549034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607176 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607216 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607227 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607235 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607245 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607253 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerDied","Data":"4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6"} Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949367 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949379 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.012031 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.012228 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mklw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-96lt9_openstack(a80f8b10-47b3-4590-95be-4468cea2f9c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.013427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-96lt9" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.047334 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.116989 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117238 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.125034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49" (OuterVolumeSpecName: "kube-api-access-4lz49") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "kube-api-access-4lz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.141604 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.166162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.167030 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.168031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config" (OuterVolumeSpecName: "config") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.169663 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219291 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219329 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219342 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219354 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219367 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.612709 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.620447 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707454 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707810 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707847 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707867 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707898 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns"
Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707918 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707926 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708603 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708618 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.709393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.711932 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712209 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712923 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726170 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.727313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"]
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.796635 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" path="/var/lib/kubelet/pods/626eb09e-01c2-4ef6-8812-2d160e90a113/volumes"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.829728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.839129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.844959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.845035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.850624 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.852806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.854202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.961901 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb"
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.962077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256"} Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.962528 4739 scope.go:117] "RemoveContainer" containerID="e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.964321 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-96lt9" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.010381 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.017587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.044140 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k" Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.404756 4739 scope.go:117] "RemoveContainer" containerID="e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427" Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.469237 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.469655 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2jh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gj9fz_openstack(34449cf3-049d-453b-ab88-ab40fdc25d6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.470929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gj9fz" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" Jan 21 15:47:13 crc kubenswrapper[4739]: W0121 15:47:13.900203 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b853447_6a81_4b1e_b26c_cefc48c32a81.slice/crio-7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc WatchSource:0}: Error finding container 7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc: Status 404 returned error can't find the container with id 7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.901300 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.974022 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerStarted","Data":"7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc"} Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.976256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951"} Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.978508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerStarted","Data":"71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619"} Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.980672 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-gj9fz" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.005754 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xwk5p" podStartSLOduration=2.801390123 podStartE2EDuration="27.00573606s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.2008954 +0000 UTC m=+1240.891601654" lastFinishedPulling="2026-01-21 15:47:13.405241327 +0000 UTC m=+1265.095947591" observedRunningTime="2026-01-21 15:47:14.002379538 +0000 UTC m=+1265.693085812" watchObservedRunningTime="2026-01-21 15:47:14.00573606 +0000 UTC m=+1265.696442314" Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.794101 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" path="/var/lib/kubelet/pods/5f37975f-9bd3-4ae2-af25-af5f12096d34/volumes" Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.997779 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerStarted","Data":"c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922"} Jan 21 15:47:15 crc kubenswrapper[4739]: I0121 15:47:15.021778 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kdx4k" podStartSLOduration=3.021763413 podStartE2EDuration="3.021763413s" podCreationTimestamp="2026-01-21 15:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:15.017754274 +0000 UTC m=+1266.708460538" watchObservedRunningTime="2026-01-21 15:47:15.021763413 +0000 UTC m=+1266.712469667" Jan 21 15:47:16 crc kubenswrapper[4739]: I0121 15:47:16.005726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca"} Jan 21 15:47:19 crc kubenswrapper[4739]: I0121 15:47:19.030504 4739 generic.go:334] "Generic (PLEG): container finished" podID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerID="c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922" exitCode=0 Jan 21 15:47:19 crc kubenswrapper[4739]: I0121 
15:47:19.030552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerDied","Data":"c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922"} Jan 21 15:47:20 crc kubenswrapper[4739]: I0121 15:47:20.040498 4739 generic.go:334] "Generic (PLEG): container finished" podID="d84721a4-d079-460e-8fc5-064ea758d676" containerID="71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619" exitCode=0 Jan 21 15:47:20 crc kubenswrapper[4739]: I0121 15:47:20.041006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerDied","Data":"71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619"} Jan 21 15:47:32 crc kubenswrapper[4739]: I0121 15:47:32.933541 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:47:32 crc kubenswrapper[4739]: I0121 15:47:32.940284 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072661 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072977 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod 
\"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs" (OuterVolumeSpecName: "logs") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073200 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073227 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073896 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.079184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv" (OuterVolumeSpecName: "kube-api-access-jtzlv") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "kube-api-access-jtzlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.080866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts" (OuterVolumeSpecName: "scripts") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.081221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv" (OuterVolumeSpecName: "kube-api-access-l6rgv") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "kube-api-access-l6rgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.081417 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts" (OuterVolumeSpecName: "scripts") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.082018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.088223 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.098143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data" (OuterVolumeSpecName: "config-data") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.101926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data" (OuterVolumeSpecName: "config-data") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.102300 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.102712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.150991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerDied","Data":"04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73"} Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.151014 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.151033 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerDied","Data":"7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc"} Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154632 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154708 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175887 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175922 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175935 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175946 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175956 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175966 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175977 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175987 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175999 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.176037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:33 crc kubenswrapper[4739]: E0121 15:47:33.850871 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Jan 21 15:47:33 crc kubenswrapper[4739]: E0121 15:47:33.851029 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcdrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7284d869-b8de-4465-a987-4c9606dcdc74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.085422 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"] Jan 21 15:47:34 crc kubenswrapper[4739]: E0121 15:47:34.086033 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086045 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync" Jan 21 15:47:34 crc kubenswrapper[4739]: E0121 15:47:34.086053 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086058 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086228 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086253 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.087050 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090304 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.091991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.094012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.106203 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"] Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.183421 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"] Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.184763 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187274 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187491 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187684 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.188263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.189807 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192381 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.198970 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.217441 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"] Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294644 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294726 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294834 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294884 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.295732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.308455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.309218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.309294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.311622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.315338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.317004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod 
\"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396791 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397054 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.400930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.400930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc 
kubenswrapper[4739]: I0121 15:47:34.401595 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.401858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404347 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404880 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.410969 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.415746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.502231 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:35 crc kubenswrapper[4739]: I0121 15:47:35.661704 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"] Jan 21 15:47:35 crc kubenswrapper[4739]: I0121 15:47:35.754353 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"] Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.185562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-755fb5c478-dt2rg" event={"ID":"5e665ce5-7f58-4b17-9ccf-3e641a34eae8","Type":"ContainerStarted","Data":"eadf16da49a3173442f24173c36befe12e6c572bbd0a99d1ca3d360de1a3ecfb"} Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.187573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerStarted","Data":"a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec"} Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.188738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"12bbf00c9259895c828408ee1ebe3c27963429ce811942fe2556c4d59391553b"} Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.208421 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-96lt9" podStartSLOduration=3.299674999 podStartE2EDuration="49.208403499s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.169601547 +0000 UTC m=+1240.860307811" lastFinishedPulling="2026-01-21 15:47:35.078330047 +0000 UTC m=+1286.769036311" observedRunningTime="2026-01-21 15:47:36.205975644 +0000 UTC m=+1287.896681928" watchObservedRunningTime="2026-01-21 15:47:36.208403499 +0000 UTC m=+1287.899109763" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.203359 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"0fda4851cc8ea6e3dfebcaef1cb1bd1e81a4d543a16d90474c5ca10602c68d1c"} Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"09f3068b4c2a8d2e5b9fd1002b05d431db2bb4b86a8982857a9e5ff8c2004501"} Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204144 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.217708 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-755fb5c478-dt2rg" event={"ID":"5e665ce5-7f58-4b17-9ccf-3e641a34eae8","Type":"ContainerStarted","Data":"533744e0326a6fdfae6c6dc94ce6c24ed5819a5d29b6c4d534a599352bbc6d40"} Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.218653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.221574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerStarted","Data":"10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe"} Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.237304 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7bc6f68bbd-rrpp7" podStartSLOduration=3.237279263 podStartE2EDuration="3.237279263s" podCreationTimestamp="2026-01-21 15:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:37.234646621 +0000 UTC m=+1288.925352895" watchObservedRunningTime="2026-01-21 15:47:37.237279263 +0000 UTC m=+1288.927985527" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.267798 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gj9fz" podStartSLOduration=4.35267541 podStartE2EDuration="50.267773074s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.177961465 +0000 UTC m=+1240.868667729" lastFinishedPulling="2026-01-21 15:47:35.093059139 +0000 UTC m=+1286.783765393" observedRunningTime="2026-01-21 15:47:37.261693789 +0000 UTC m=+1288.952400063" watchObservedRunningTime="2026-01-21 15:47:37.267773074 +0000 UTC m=+1288.958479338" Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.287972 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-755fb5c478-dt2rg" podStartSLOduration=3.287953154 podStartE2EDuration="3.287953154s" podCreationTimestamp="2026-01-21 15:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:37.285260871 +0000 UTC m=+1288.975967145" watchObservedRunningTime="2026-01-21 15:47:37.287953154 +0000 UTC m=+1288.978659418" Jan 21 15:47:42 crc kubenswrapper[4739]: I0121 15:47:42.280568 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerID="6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac" exitCode=0 Jan 21 15:47:42 crc kubenswrapper[4739]: I0121 15:47:42.280674 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerDied","Data":"6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac"} Jan 21 15:47:43 crc kubenswrapper[4739]: I0121 15:47:43.290956 4739 generic.go:334] "Generic (PLEG): container finished" podID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerID="a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec" exitCode=0 Jan 21 15:47:43 crc kubenswrapper[4739]: I0121 15:47:43.291042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerDied","Data":"a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec"} Jan 21 15:47:45 crc kubenswrapper[4739]: I0121 15:47:45.940105 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:47:45 crc kubenswrapper[4739]: I0121 15:47:45.947511 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.102581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.102933 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103129 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103226 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.114986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.128130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt" (OuterVolumeSpecName: "kube-api-access-nwgjt") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "kube-api-access-nwgjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.145976 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.160142 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw" (OuterVolumeSpecName: "kube-api-access-2mklw") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "kube-api-access-2mklw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210659 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210672 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210683 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.229163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.235986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.262039 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data" (OuterVolumeSpecName: "config-data") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.311973 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.312009 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.312023 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerDied","Data":"8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c"} Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319134 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319200 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerDied","Data":"c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd"} Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321368 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd" Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321420 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:47:46 crc kubenswrapper[4739]: E0121 15:47:46.378306 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.233415 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:47 crc kubenswrapper[4739]: E0121 15:47:47.234145 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: E0121 15:47:47.234210 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234219 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234388 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234417 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.248306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.266016 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.271297 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.273041 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.273934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.275121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.283739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.324354 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343369 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc 
kubenswrapper[4739]: I0121 15:47:47.343593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3"} Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349598 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent" containerID="cri-o://e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349888 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349950 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd" containerID="cri-o://21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.350028 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent" containerID="cri-o://44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.379348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.444980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 
15:47:47.445253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.446492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.454932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.466077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.494499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.525566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.533610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.534591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.535146 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " 
pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.535914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.556301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.580412 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.582089 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.583221 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.596229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.596646 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659839 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8d9\" 
(UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.748843 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.750430 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.759745 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761969 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc 
kubenswrapper[4739]: I0121 15:47:47.762023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.762050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.762071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.765278 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.768380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.768983 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.778050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.786692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.841353 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863187 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: 
\"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.878980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.889932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.904184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.929894 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.935884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.945980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.984084 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.985794 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.002876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.057919 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.105622 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169228 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169450 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " 
pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272333 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.273429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.275797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.279468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.279957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: W0121 15:47:48.315553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3bf76ca_61be_4cbe_b8ce_780502ae0205.slice/crio-f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280 WatchSource:0}: Error finding container f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280: Status 404 returned error can't find the container with id 
f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280 Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.350741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.361101 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.398289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.453785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280"} Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.680141 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:48 crc kubenswrapper[4739]: W0121 15:47:48.696189 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea7c1ca_928b_4218_b3da_df8050838259.slice/crio-1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a WatchSource:0}: Error finding container 1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a: Status 404 returned error can't find the container with id 1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.912801 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.921723 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.209870 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.474031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.474089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"5a9648a36b5a7cda7cc2a5615a5ea2242f6d1558a32a504899b7d452f960802b"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.476929 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerID="72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.476997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerDied","Data":"72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd"} Jan 21 
15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.477024 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerStarted","Data":"0933acba2e4b7f54eceec413c01f85001a8af5cfb0dc791f6a7217faba40bc93"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.479649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.483113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerStarted","Data":"14a91ba32f00981551a07b14eb545cc84eebbadef30a6ef237314c70cbc39eaf"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487264 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487293 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.351427 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427692 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427728 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.446917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9" (OuterVolumeSpecName: "kube-api-access-7l8d9") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "kube-api-access-7l8d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.462179 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.466921 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.467862 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.471228 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config" (OuterVolumeSpecName: "config") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.499278 4739 generic.go:334] "Generic (PLEG): container finished" podID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerID="10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe" exitCode=0 Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.499357 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerDied","Data":"10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.510394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.511263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.511304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.515062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerDied","Data":"0933acba2e4b7f54eceec413c01f85001a8af5cfb0dc791f6a7217faba40bc93"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.515120 4739 scope.go:117] "RemoveContainer" containerID="72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.518118 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h"
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.529961 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.529993 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530004 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530015 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530026 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.548437 4739 generic.go:334] "Generic (PLEG): container finished" podID="56d92e40-3e85-4646-9a40-bab0619a7920" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925" exitCode=0
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.548482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"}
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.555804 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-798bc7f66d-zdjvx" podStartSLOduration=3.555781073 podStartE2EDuration="3.555781073s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:50.547581299 +0000 UTC m=+1302.238287613" watchObservedRunningTime="2026-01-21 15:47:50.555781073 +0000 UTC m=+1302.246487337"
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.639739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"]
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.650964 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"]
Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.818433 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" path="/var/lib/kubelet/pods/c3db54ff-0694-44eb-949d-1d6660db7f04/volumes"
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.582544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"6f0bb5fb741f3fb8a8666ba4fe400119ef088edf5ec6ed2840a1bd9813403d1a"}
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.583018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"fe446070b5109da0765d3b2b89b114309b05f7df8c12aaeeffd47aebd824cebe"}
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.586989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerStarted","Data":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"}
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.587100 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-9btpq"
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.589539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"f3091b8df66079b609f342143d891179409c370c4e49ce4e16cf912d126e14a1"}
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.589581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"1080b909b905ab262f33632477a5a382df0c85b13b10bb86668843c935a71be0"}
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.610347 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" podStartSLOduration=2.357903302 podStartE2EDuration="4.610329436s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="2026-01-21 15:47:48.700868368 +0000 UTC m=+1300.391574632" lastFinishedPulling="2026-01-21 15:47:50.953294502 +0000 UTC m=+1302.644000766" observedRunningTime="2026-01-21 15:47:51.606571645 +0000 UTC m=+1303.297277919" watchObservedRunningTime="2026-01-21 15:47:51.610329436 +0000 UTC m=+1303.301035690"
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.669968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" podStartSLOduration=4.669943572 podStartE2EDuration="4.669943572s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:51.666351035 +0000 UTC m=+1303.357057299" watchObservedRunningTime="2026-01-21 15:47:51.669943572 +0000 UTC m=+1303.360649836"
Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.695951 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" podStartSLOduration=2.099647421 podStartE2EDuration="4.695929941s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="2026-01-21 15:47:48.356899239 +0000 UTC m=+1300.047605503" lastFinishedPulling="2026-01-21 15:47:50.953181759 +0000 UTC m=+1302.643888023" observedRunningTime="2026-01-21 15:47:51.693356441 +0000 UTC m=+1303.384062725" watchObservedRunningTime="2026-01-21 15:47:51.695929941 +0000 UTC m=+1303.386636205"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.123126 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277328 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277451 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") "
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.278060 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.283560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4" (OuterVolumeSpecName: "kube-api-access-g2jh4") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "kube-api-access-g2jh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.286979 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.296992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts" (OuterVolumeSpecName: "scripts") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.315006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.336959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data" (OuterVolumeSpecName: "config-data") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.379970 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380617 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380738 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380812 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380980 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599346 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599345 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerDied","Data":"bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c"}
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599404 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.875930 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 15:47:52 crc kubenswrapper[4739]: E0121 15:47:52.876360 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876383 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init"
Jan 21 15:47:52 crc kubenswrapper[4739]: E0121 15:47:52.876398 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876409 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876620 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876658 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.877703 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.881712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.882184 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.884596 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.887296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.900130 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.923435 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"]
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992867 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.996258 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"]
Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.998627 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.093331 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"]
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094443 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.095255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.110850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.117429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.134153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.135423 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.150614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.197330 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201686 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.225790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.232227 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.244889 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.246399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.253176 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.262168 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.322289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403110 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403184 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403341 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403392 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403447 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403479 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505709 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.511749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.512239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.517430 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.517997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.523405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.544920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625242 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca" exitCode=0
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625529 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns" containerID="cri-o://1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" gracePeriod=10
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca"}
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.640416 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.663978 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813843 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814073 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") "
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.819165 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.821391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.823508 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts" (OuterVolumeSpecName: "scripts") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.827203 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs" (OuterVolumeSpecName: "kube-api-access-hcdrs") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "kube-api-access-hcdrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.829639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919092 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919519 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919536 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919548 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.924106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.931251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.936549 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.014264 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data" (OuterVolumeSpecName: "config-data") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.021503 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.021539 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.053916 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"]
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.245657 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.407245 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531018 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") "
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") "
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531204 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") "
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") "
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.532031 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") "
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.561056 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2" (OuterVolumeSpecName: "kube-api-access-b8ck2") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "kube-api-access-b8ck2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.638319 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.647645 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.656778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"a33c22381a2431a5d5a985f009f84a51a3c4e02d87387c395648e543219c46c5"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.657785 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config" (OuterVolumeSpecName: "config") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.657919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.661899 4739 generic.go:334] "Generic (PLEG): container finished" podID="56d92e40-3e85-4646-9a40-bab0619a7920" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" exitCode=0
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.661991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"14a91ba32f00981551a07b14eb545cc84eebbadef30a6ef237314c70cbc39eaf"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662038 4739 scope.go:117] "RemoveContainer" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662227 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.676905 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678440 4739 generic.go:334] "Generic (PLEG): container finished" podID="63913da1-1f11-4850-9e92-a75afe2013f7" containerID="52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb" exitCode=0
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerStarted","Data":"1b39dcf58e2eff40de38a5ef2feefae8fb7d5ed95e0566e20b66ac63802c2ca3"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.711314 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.714133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"7211b1d26178cb64e4faaf584f0788cadfa23e148dc68767018276c936da671e"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.721959 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"d369d4eb1357f599b17e2e6a2c414771f3c1428ce9e15341f9792ffbef6b24fa"}
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.743349 4739 scope.go:117] "RemoveContainer" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744383 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744425 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744437 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744447 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.908567 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.937550 4739 scope.go:117] "RemoveContainer" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.938288 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": container with ID starting with 1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd not found: ID does not exist" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"} err="failed to get container status \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": rpc error: code = NotFound desc = could not find container \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": container with ID starting with 1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd not found: ID does not exist"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938356 4739 scope.go:117] "RemoveContainer" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.938835 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": container with ID starting with e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925 not found: ID does not exist" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938862 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"} err="failed to get container status \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": rpc error: code = NotFound desc = could not find container \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": container with ID starting with e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925 not found: ID does not exist"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938880 4739 scope.go:117] "RemoveContainer" containerID="21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.942071 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.987956 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988444 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988461 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988478 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988486 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988506 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="init"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988542 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="init"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988553 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988561 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd"
Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988590 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988801 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988845 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988858 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988872 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd"
Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.993522 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.003010 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.003233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.018990 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.072723 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.072784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073285 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073519 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"]
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.098521 4739 scope.go:117] "RemoveContainer" containerID="44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.114590 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"]
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.178967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179736 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.186435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0"
Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.187154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.193168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.202645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.202685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.207080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.242088 4739 scope.go:117] "RemoveContainer" containerID="e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.421148 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.787573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerStarted","Data":"fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565"} Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.789900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.814992 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" podStartSLOduration=3.814976402 podStartE2EDuration="3.814976402s" podCreationTimestamp="2026-01-21 15:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:55.812621927 +0000 UTC m=+1307.503328191" watchObservedRunningTime="2026-01-21 15:47:55.814976402 +0000 UTC m=+1307.505682666" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.078858 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:56 crc kubenswrapper[4739]: W0121 15:47:56.095033 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab3cb9e_14c1_493f_b182_8f8d43eec8cf.slice/crio-8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e WatchSource:0}: Error finding container 8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e: Status 404 returned error can't find the container with id 8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e Jan 21 15:47:56 
crc kubenswrapper[4739]: I0121 15:47:56.795766 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" path="/var/lib/kubelet/pods/56d92e40-3e85-4646-9a40-bab0619a7920/volumes" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.797245 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" path="/var/lib/kubelet/pods/7284d869-b8de-4465-a987-4c9606dcdc74/volumes" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.859322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.865060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.887037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.887619 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.891755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.922600 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.922578807 podStartE2EDuration="4.922578807s" podCreationTimestamp="2026-01-21 15:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:57.911224868 +0000 UTC m=+1309.601931152" watchObservedRunningTime="2026-01-21 15:47:57.922578807 +0000 UTC m=+1309.613285091" Jan 21 15:47:58 crc kubenswrapper[4739]: I0121 15:47:58.347865 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.066669 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.068672 4739 util.go:30] "No sandbox for pod can be found. 
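
The pod_startup_latency_tracker entry above encodes simple arithmetic: podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For cinder-api-0 the pull timestamps are the zero time, so the two durations are equal; for cinder-scheduler-0 further down, 7.994119360s minus the 1.026373715s pull yields exactly the reported 6.967745645s. A sketch of that calculation (startupDurations is illustrative, not the tracker's code):

    package main

    import (
        "fmt"
        "time"
    )

    // startupDurations mirrors the relation between the two log fields.
    func startupDurations(created, firstPull, lastPull, running time.Time) (e2e, slo time.Duration) {
        e2e = running.Sub(created)
        slo = e2e
        if !firstPull.IsZero() { // a zero time means no pull window to exclude
            slo -= lastPull.Sub(firstPull)
        }
        return
    }

    func main() {
        created := time.Date(2026, 1, 21, 15, 47, 52, 0, time.UTC)
        // Nanosecond offsets from creation, read off the cinder-scheduler entry.
        firstPull := created.Add(1936341778) // 15:47:53.936341778
        lastPull := created.Add(2962715493)  // 15:47:54.962715493
        running := created.Add(7994119360)   // E2E duration of 7.99411936s
        e2e, slo := startupDurations(created, firstPull, lastPull, running)
        fmt.Println(e2e, slo) // 7.99411936s 6.967745645s
    }
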
Need to start a new one" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.072550 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.073008 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.077984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.152943 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.153287 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221869 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.222009 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.222051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.325099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.325137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.327031 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.333250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.335270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.341763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.349370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.353262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.371876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.401734 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.931553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb"} Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937354 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" containerID="cri-o://84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" gracePeriod=30 Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90"} Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937946 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" containerID="cri-o://9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" gracePeriod=30 Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.979649 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.994138 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.967745645 podStartE2EDuration="7.99411936s" podCreationTimestamp="2026-01-21 15:47:52 +0000 UTC" firstStartedPulling="2026-01-21 15:47:53.936341778 +0000 UTC m=+1305.627048042" lastFinishedPulling="2026-01-21 15:47:54.962715493 +0000 UTC m=+1306.653421757" observedRunningTime="2026-01-21 15:47:59.973302622 +0000 UTC m=+1311.664008896" watchObservedRunningTime="2026-01-21 15:47:59.99411936 +0000 UTC m=+1311.684825624" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.942621 4739 util.go:48] "No ready sandbox for pod can be found. 
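
"Killing container with a grace period" starts the shutdown handshake for the old cinder-api-0: the runtime delivers SIGTERM, waits up to gracePeriod=30 seconds, then escalates to SIGKILL. The exitCode=143 reported just below is 128+15, meaning the cinder-api-log process died on SIGTERM, while exitCode=0 means the API container shut down cleanly on its own. A simplified, Unix-only sketch of the escalation (the real path is the CRI StopContainer RPC):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace asks nicely with SIGTERM, then SIGKILLs after the grace
    // period, the same escalation the gracePeriod=30 entries describe.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period (shell code 143 if TERM killed it)
        case <-time.After(grace):
            _ = cmd.Process.Kill() // escalate to SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        _ = cmd.Start()
        fmt.Println(stopWithGrace(cmd, 2*time.Second)) // "signal: terminated"
    }
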
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962607 4739 generic.go:334] "Generic (PLEG): container finished" podID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" exitCode=0 Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962644 4739 generic.go:334] "Generic (PLEG): container finished" podID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" exitCode=143 Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962745 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962757 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"d369d4eb1357f599b17e2e6a2c414771f3c1428ce9e15341f9792ffbef6b24fa"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962775 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.963000 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.979681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"f0b6dcd5a5b6dceed75d0355faed78983796d7275b0de393fcda71895757aa77"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.979723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"e977b6008168b767373a0a7797d5cb19967574b6aaa598c733cb8ee0010cea2b"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.996766 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062509 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.067707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.068389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.069285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs" (OuterVolumeSpecName: "logs") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.075487 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.077174 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.077221 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} err="failed to get container status \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.077253 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.079681 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.079723 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} err="failed to get container status \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.079744 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089194 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4" (OuterVolumeSpecName: "kube-api-access-mzxj4") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "kube-api-access-mzxj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089375 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} err="failed to get container status \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089434 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.090991 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} err="failed to get container status \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.096009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts" (OuterVolumeSpecName: "scripts") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.134932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165867 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165914 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165927 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165939 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165951 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165961 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.189209 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data" (OuterVolumeSpecName: "config-data") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.270183 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.311740 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.318649 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349314 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.349765 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349790 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.349870 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349882 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.350052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.350087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.351149 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.364521 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.364792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.365012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.400227 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.477631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.477917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478359 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478668 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580263 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.581402 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.581416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.587796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.588161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.588763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.589083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.593262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.610382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.614257 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.735013 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.002984 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595"} Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.019695 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"d9d13fb3a888b183e27fe291f1cdc7c5ddccb0d70a9e5a842787062e9182e39c"} Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.020048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.020075 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.064111 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c6c95c866-nplmh" podStartSLOduration=3.06408711 podStartE2EDuration="3.06408711s" podCreationTimestamp="2026-01-21 15:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:02.058186719 +0000 UTC m=+1313.748892983" watchObservedRunningTime="2026-01-21 15:48:02.06408711 +0000 UTC m=+1313.754793374" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.300373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:02 crc kubenswrapper[4739]: W0121 15:48:02.478158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod340cac45_4a1b_404b_abf0_24e2eb31980b.slice/crio-50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12 WatchSource:0}: Error finding container 50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12: Status 404 returned error can't find the container with id 50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12 Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.767276 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.812879 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" path="/var/lib/kubelet/pods/a685d6b8-0db9-4de5-a4e1-3c961a037222/volumes" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.044075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0"} Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.050766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12"} Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.151297 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.234078 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.324988 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.393478 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.393750 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" containerID="cri-o://e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" gracePeriod=10 Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.059587 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerID="e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" exitCode=0 Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.059771 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95"} Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.061339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"d186510caa0b09772ceaffa7c52516409e81c5c62d2594746c3bd757dd216251"} Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.238059 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.238396 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.492127 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.857679 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894347 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894779 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.947999 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v" (OuterVolumeSpecName: "kube-api-access-vqj2v") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "kube-api-access-vqj2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.996925 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.000465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.025082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.040217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config" (OuterVolumeSpecName: "config") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.046796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.098967 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099008 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099018 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099027 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"2944760882b05c708f270896329b53b5ff2a4da1eec8a53b5962df9cab5a1dd9"} Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116605 4739 scope.go:117] "RemoveContainer" containerID="e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116774 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.203402 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.222611 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.222663 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.233157 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.277011 4739 scope.go:117] "RemoveContainer" containerID="5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.128077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294"} Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.128698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.132074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"fd822509eeb9641ca6ffcb3bc55865752da5b68a55aa93e23bb28c85f2439abc"} Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.132303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.157542 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7267112190000002 podStartE2EDuration="12.157519361s" podCreationTimestamp="2026-01-21 15:47:54 +0000 UTC" firstStartedPulling="2026-01-21 15:47:56.104133565 +0000 UTC m=+1307.794839829" lastFinishedPulling="2026-01-21 15:48:05.534941707 +0000 UTC m=+1317.225647971" observedRunningTime="2026-01-21 15:48:06.151538709 +0000 UTC m=+1317.842244973" watchObservedRunningTime="2026-01-21 15:48:06.157519361 +0000 UTC m=+1317.848225625" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.794723 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" path="/var/lib/kubelet/pods/2a622ecf-b73e-4104-8ab5-c60fea198474/volumes" Jan 21 15:48:07 crc kubenswrapper[4739]: I0121 15:48:07.057549 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:07 crc kubenswrapper[4739]: I0121 15:48:07.081206 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.081179746 podStartE2EDuration="6.081179746s" 
podCreationTimestamp="2026-01-21 15:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:06.207408252 +0000 UTC m=+1317.898114516" watchObservedRunningTime="2026-01-21 15:48:07.081179746 +0000 UTC m=+1318.771886010" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.337744 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.348408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.407028 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.608924 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:48:09 crc kubenswrapper[4739]: I0121 15:48:09.166155 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" containerID="cri-o://95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" gracePeriod=30 Jan 21 15:48:09 crc kubenswrapper[4739]: I0121 15:48:09.166972 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" containerID="cri-o://d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" gracePeriod=30 Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.329365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816233 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:10 crc kubenswrapper[4739]: E0121 15:48:10.816691 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816712 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: E0121 15:48:10.816728 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="init" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816736 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="init" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816997 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.817746 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.822292 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.822896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.823040 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.862512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.889840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.889956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.890038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.890117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.992379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.992487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.994225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.006387 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.006502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.007695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.010848 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.012568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.135371 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.201991 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e00032-f7f2-4119-9959-855f772bde19" containerID="d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" exitCode=0 Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202216 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e00032-f7f2-4119-9959-855f772bde19" containerID="95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" exitCode=0 Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb"} Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f"} Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.309629 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415514 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.420347 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.424229 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.431896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2" (OuterVolumeSpecName: "kube-api-access-ktkd2") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "kube-api-access-ktkd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435080 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435996 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.436011 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.436023 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.441625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts" (OuterVolumeSpecName: "scripts") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.512657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.545723 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.545751 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.588801 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data" (OuterVolumeSpecName: "config-data") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.655416 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.731727 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:11 crc kubenswrapper[4739]: W0121 15:48:11.734313 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f733769_d3f8_4ced_be3b_cbb84339dac5.slice/crio-6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676 WatchSource:0}: Error finding container 6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676: Status 404 returned error can't find the container with id 6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676 Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.213787 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"a33c22381a2431a5d5a985f009f84a51a3c4e02d87387c395648e543219c46c5"} Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.213861 4739 scope.go:117] "RemoveContainer" containerID="d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.214048 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.217579 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f733769-d3f8-4ced-be3b-cbb84339dac5","Type":"ContainerStarted","Data":"6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676"} Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.253450 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.262947 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.278198 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: E0121 15:48:12.278991 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: E0121 15:48:12.279024 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279200 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279221 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.280292 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.284956 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.290290 4739 scope.go:117] "RemoveContainer" containerID="95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.319924 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.367486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqtn\" (UniqueName: \"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368540 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.369003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.470798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.471944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcqtn\" (UniqueName: \"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.473948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.477110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.483627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.484486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.485002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.516485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqtn\" (UniqueName: 
\"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.609962 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.838023 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e00032-f7f2-4119-9959-855f772bde19" path="/var/lib/kubelet/pods/d5e00032-f7f2-4119-9959-855f772bde19/volumes" Jan 21 15:48:13 crc kubenswrapper[4739]: I0121 15:48:13.081620 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:13 crc kubenswrapper[4739]: I0121 15:48:13.231003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"9ff8d41474925ef7cc6cdb19cff84e2e1db653e4e697b718b3ed0f19fd54d4f3"} Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.041289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.252483 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"77fb25ea41a2d5d4fb0e8ad39bfdaa9f8bab7457252c922cbbc26b348ecb3a2d"} Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.453071 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.479044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.565592 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.568792 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" containerID="cri-o://218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" gracePeriod=30 Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.569553 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" containerID="cri-o://bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" gracePeriod=30 Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.342743 4739 generic.go:334] "Generic (PLEG): container finished" podID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerID="218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" exitCode=143 Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.343122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" 
event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3"} Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.742074 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="340cac45-4a1b-404b-abf0-24e2eb31980b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.151:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:16 crc kubenswrapper[4739]: I0121 15:48:16.360400 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"439ce4326211cb9472aefe60beccab6af18d0cfc72b534e50a8779fdb6de17f0"} Jan 21 15:48:16 crc kubenswrapper[4739]: I0121 15:48:16.740988 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="340cac45-4a1b-404b-abf0-24e2eb31980b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.151:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.111971 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:42058->10.217.0.144:9311: read: connection reset by peer" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.112009 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:42074->10.217.0.144:9311: read: connection reset by peer" Jan 21 15:48:18 crc kubenswrapper[4739]: E0121 15:48:18.217448 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5322ea6d_a0d2_4bb1_a3e9_9202e52d292e.slice/crio-bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.376547 4739 generic.go:334] "Generic (PLEG): container finished" podID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerID="bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" exitCode=0 Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.377668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b"} Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.399367 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.399349659 podStartE2EDuration="6.399349659s" podCreationTimestamp="2026-01-21 15:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:18.393017616 +0000 UTC m=+1330.083723880" watchObservedRunningTime="2026-01-21 15:48:18.399349659 +0000 UTC m=+1330.090055923" Jan 21 
15:48:19 crc kubenswrapper[4739]: I0121 15:48:19.712810 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 15:48:22 crc kubenswrapper[4739]: I0121 15:48:22.610303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 15:48:22 crc kubenswrapper[4739]: I0121 15:48:22.836282 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 15:48:26 crc kubenswrapper[4739]: I0121 15:48:26.821174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.012249 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.108409 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.108468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.174725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.176550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.176980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177111 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177228 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs" (OuterVolumeSpecName: "logs") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.180765 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.180960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n" (OuterVolumeSpecName: "kube-api-access-8r22n") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "kube-api-access-8r22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.213309 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.220860 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data" (OuterVolumeSpecName: "config-data") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278472 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278711 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278789 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278878 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278953 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"5a9648a36b5a7cda7cc2a5615a5ea2242f6d1558a32a504899b7d452f960802b"} Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501927 4739 scope.go:117] "RemoveContainer" containerID="bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501723 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.549287 4739 scope.go:117] "RemoveContainer" containerID="218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.552909 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.564195 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.742964 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.743217 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bch5f9h5d6hb5h64dh664h8h695h684h659hf5h547h98hfh66dh648h78hb7hcch5dfh57fh584h69h5bch7dhd5h578h5b8h65h89h66fhccq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62fdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(8f733769-d3f8-4ced-be3b-cbb84339dac5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.744426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="8f733769-d3f8-4ced-be3b-cbb84339dac5" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.795900 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" path="/var/lib/kubelet/pods/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e/volumes" Jan 21 15:48:29 crc kubenswrapper[4739]: E0121 15:48:29.512599 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="8f733769-d3f8-4ced-be3b-cbb84339dac5" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.337501 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:34 crc kubenswrapper[4739]: E0121 15:48:34.338114 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338127 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: E0121 15:48:34.338151 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338157 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338322 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338856 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.347959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.425221 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.426740 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.442688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.482410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.482476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.535599 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.536774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.539712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.550214 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584387 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.585144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.607774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.632333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.633338 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.654911 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686371 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.687106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.693909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.705556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p952c\" (UniqueName: 
\"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.750262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.775150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.776545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.779063 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788797 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.795761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.802106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.825853 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: 
\"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.866416 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.893961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.896753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.896961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.897043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.897761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.960987 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.962045 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.969456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.974106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.980729 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.000339 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.000459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.002334 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.061436 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.103635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.103722 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.108722 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.205284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.205603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.206609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.222635 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.222683 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.227435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.275571 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.282596 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.325886 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.535224 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.539464 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeda4862_d2cc_41a1_b82f_067b3c4ad84f.slice/crio-15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8 WatchSource:0}: Error finding container 15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8: Status 404 returned error can't find the container with id 15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8 Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.542418 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.569030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerStarted","Data":"e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.570588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerStarted","Data":"90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.571514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerStarted","Data":"15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.703171 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.716241 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed41032_b872_4711_ab4c_79ed5f33053f.slice/crio-94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa WatchSource:0}: Error finding container 94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa: Status 404 returned error can't find the container with id 94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.818729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.819041 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eda7c2f_1cb1_4fcc_840b_16699d95e267.slice/crio-59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594 WatchSource:0}: Error finding container 59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594: Status 404 returned error can't find the container with id 59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594 Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 
15:48:35.900842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.581111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerStarted","Data":"69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.583290 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerStarted","Data":"4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.583316 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerStarted","Data":"59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.585205 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerStarted","Data":"79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.585252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerStarted","Data":"94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.587159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerStarted","Data":"0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.587189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerStarted","Data":"2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.589101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerStarted","Data":"e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.590759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerStarted","Data":"e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.604880 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-x8jnb" podStartSLOduration=2.60485513 podStartE2EDuration="2.60485513s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.599216326 +0000 UTC m=+1348.289922600" watchObservedRunningTime="2026-01-21 
15:48:36.60485513 +0000 UTC m=+1348.295561404" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.619206 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" podStartSLOduration=2.619181271 podStartE2EDuration="2.619181271s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.617805163 +0000 UTC m=+1348.308511437" watchObservedRunningTime="2026-01-21 15:48:36.619181271 +0000 UTC m=+1348.309887535" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.648898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-kzsmk" podStartSLOduration=2.648873481 podStartE2EDuration="2.648873481s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.632576226 +0000 UTC m=+1348.323282490" watchObservedRunningTime="2026-01-21 15:48:36.648873481 +0000 UTC m=+1348.339579755" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.667398 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-crxtp" podStartSLOduration=2.667347084 podStartE2EDuration="2.667347084s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.644285605 +0000 UTC m=+1348.334991869" watchObservedRunningTime="2026-01-21 15:48:36.667347084 +0000 UTC m=+1348.358053338" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.677145 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ade4-account-create-update-24sls" podStartSLOduration=2.6771280109999998 podStartE2EDuration="2.677128011s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.65875025 +0000 UTC m=+1348.349456514" watchObservedRunningTime="2026-01-21 15:48:36.677128011 +0000 UTC m=+1348.367834265" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.689835 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" podStartSLOduration=2.689795976 podStartE2EDuration="2.689795976s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.670253604 +0000 UTC m=+1348.360959888" watchObservedRunningTime="2026-01-21 15:48:36.689795976 +0000 UTC m=+1348.380502240" Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.601445 4739 generic.go:334] "Generic (PLEG): container finished" podID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerID="4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.601512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerDied","Data":"4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6"} Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 
15:48:37.604638 4739 generic.go:334] "Generic (PLEG): container finished" podID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerID="0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.604720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerDied","Data":"0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056"} Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.613297 4739 generic.go:334] "Generic (PLEG): container finished" podID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerID="69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.614617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerDied","Data":"69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2"} Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.621755 4739 generic.go:334] "Generic (PLEG): container finished" podID="fe9459ad-de74-49f2-b35f-040c2b873848" containerID="e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5" exitCode=0 Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.621888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerDied","Data":"e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5"} Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.985761 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.113342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.113456 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.115366 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eda7c2f-1cb1-4fcc-840b-16699d95e267" (UID: "8eda7c2f-1cb1-4fcc-840b-16699d95e267"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.135964 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl" (OuterVolumeSpecName: "kube-api-access-mq7jl") pod "8eda7c2f-1cb1-4fcc-840b-16699d95e267" (UID: "8eda7c2f-1cb1-4fcc-840b-16699d95e267"). InnerVolumeSpecName "kube-api-access-mq7jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.192370 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.203943 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.217208 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.217245 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.319284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f47244c1-eeda-40a8-b4ae-57e2d6175c7e" (UID: "f47244c1-eeda-40a8-b4ae-57e2d6175c7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.319606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" (UID: "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.321505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56" (OuterVolumeSpecName: "kube-api-access-slj56") pod "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" (UID: "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a"). InnerVolumeSpecName "kube-api-access-slj56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.322977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv" (OuterVolumeSpecName: "kube-api-access-wh9rv") pod "f47244c1-eeda-40a8-b4ae-57e2d6175c7e" (UID: "f47244c1-eeda-40a8-b4ae-57e2d6175c7e"). InnerVolumeSpecName "kube-api-access-wh9rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421132 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421163 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421175 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421185 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.630945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerDied","Data":"90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.630989 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.631042 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerDied","Data":"59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636414 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636474 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639495 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerDied","Data":"2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639620 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:39.999692 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.132969 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"fe9459ad-de74-49f2-b35f-040c2b873848\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.133045 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"fe9459ad-de74-49f2-b35f-040c2b873848\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.134010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe9459ad-de74-49f2-b35f-040c2b873848" (UID: "fe9459ad-de74-49f2-b35f-040c2b873848"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.139405 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c" (OuterVolumeSpecName: "kube-api-access-p952c") pod "fe9459ad-de74-49f2-b35f-040c2b873848" (UID: "fe9459ad-de74-49f2-b35f-040c2b873848"). InnerVolumeSpecName "kube-api-access-p952c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.235074 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.235107 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.658940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerDied","Data":"e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6"} Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.659284 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.659362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.670278 4739 generic.go:334] "Generic (PLEG): container finished" podID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerID="e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.670611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerDied","Data":"e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df"} Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.673241 4739 generic.go:334] "Generic (PLEG): container finished" podID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerID="b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.673300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerDied","Data":"b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f"} Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.675005 4739 generic.go:334] "Generic (PLEG): container finished" podID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerID="79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.675045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerDied","Data":"79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466"} Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.198218 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.325161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf" (OuterVolumeSpecName: "kube-api-access-j2sbf") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "kube-api-access-j2sbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.330429 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.354506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config" (OuterVolumeSpecName: "config") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401930 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401965 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401975 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerDied","Data":"72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0"} Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693891 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693995 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.776886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.784751 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915176 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915303 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"5ed41032-b872-4711-ab4c-79ed5f33053f\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"5ed41032-b872-4711-ab4c-79ed5f33053f\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915838 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "deda4862-d2cc-41a1-b82f-067b3c4ad84f" (UID: "deda4862-d2cc-41a1-b82f-067b3c4ad84f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.916149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ed41032-b872-4711-ab4c-79ed5f33053f" (UID: "5ed41032-b872-4711-ab4c-79ed5f33053f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.920066 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt" (OuterVolumeSpecName: "kube-api-access-t2kvt") pod "5ed41032-b872-4711-ab4c-79ed5f33053f" (UID: "5ed41032-b872-4711-ab4c-79ed5f33053f"). InnerVolumeSpecName "kube-api-access-t2kvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.921689 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f" (OuterVolumeSpecName: "kube-api-access-df74f") pod "deda4862-d2cc-41a1-b82f-067b3c4ad84f" (UID: "deda4862-d2cc-41a1-b82f-067b3c4ad84f"). InnerVolumeSpecName "kube-api-access-df74f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.940718 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941122 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941144 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941160 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941166 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941175 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941181 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941198 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941220 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941227 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941250 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941264 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941529 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941548 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" 
containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941557 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941566 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941576 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941584 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941592 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.942722 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.957342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020368 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020705 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020987 4739 
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021012 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021025 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021057 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.130916 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.130988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131047 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.133259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.136435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
\"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.136754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.137354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.156854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.281719 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.560333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.565697 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.582638 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.582880 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.583023 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.583164 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.584388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646289 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705009 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerDied","Data":"94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa"} Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705067 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705071 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerDied","Data":"15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8"} Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706721 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706721 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748167 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748332 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.781806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.782560 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.784260 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.791655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.797648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.830864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.897742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.641026 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:48:45 crc kubenswrapper[4739]: W0121 15:48:45.655046 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116a13ea_fefe_44b4_8542_34cf022a48e0.slice/crio-7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342 WatchSource:0}: Error finding container 7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342: Status 404 returned error can't find the container with id 7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342 Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.726978 4739 generic.go:334] "Generic (PLEG): container finished" podID="5091d434-2266-4386-a1b1-ce00719cd889" containerID="dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc" exitCode=0 Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.727117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc"} Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.727184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerStarted","Data":"e034200d9d2fe17264411387abcf6da9e0fcd72661056799249816cb13df0c87"} Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.730896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" 
event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.742010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerStarted","Data":"bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.742595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.743608 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f733769-d3f8-4ced-be3b-cbb84339dac5","Type":"ContainerStarted","Data":"c246066db45347b75f0931918186123ca025e604ddc4889f153f49ced9a698a0"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.775459 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" podStartSLOduration=3.775440542 podStartE2EDuration="3.775440542s" podCreationTimestamp="2026-01-21 15:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:46.76213224 +0000 UTC m=+1358.452838504" watchObservedRunningTime="2026-01-21 15:48:46.775440542 +0000 UTC m=+1358.466146806" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.795063 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-766cc5675b-dbqhs" podStartSLOduration=2.795047717 podStartE2EDuration="2.795047717s" podCreationTimestamp="2026-01-21 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:46.792339063 +0000 UTC m=+1358.483045327" watchObservedRunningTime="2026-01-21 15:48:46.795047717 +0000 UTC m=+1358.485753971" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.820379 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.018107063 podStartE2EDuration="36.820358677s" podCreationTimestamp="2026-01-21 15:48:10 +0000 UTC" firstStartedPulling="2026-01-21 15:48:11.736664573 +0000 UTC m=+1323.427370837" lastFinishedPulling="2026-01-21 15:48:45.538916187 +0000 UTC m=+1357.229622451" observedRunningTime="2026-01-21 15:48:46.814077106 +0000 UTC m=+1358.504783370" watchObservedRunningTime="2026-01-21 15:48:46.820358677 +0000 UTC m=+1358.511064941" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.587870 
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.589713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.592059 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.592640 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.606278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b578bfdc-tzd9g"]
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.706456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.706890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707026 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707066 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.816295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.817404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.821374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
\"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.826549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.826680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.846458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.846730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.921212 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:48 crc kubenswrapper[4739]: I0121 15:48:48.582756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b578bfdc-tzd9g"] Jan 21 15:48:48 crc kubenswrapper[4739]: I0121 15:48:48.762450 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"1e063753f0b966b9b4025a0964e55094b8c1588c754bccbf1172fd3f14433879"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.579978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580848 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" containerID="cri-o://dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580948 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent" containerID="cri-o://6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580961 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core" containerID="cri-o://4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.581158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd" containerID="cri-o://85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.629016 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.629210 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" containerID="cri-o://e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.817426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"6f3734d2249bb2c439b0ee1a8e5bea53e320cca15b4cd94958407efc75f9f1f3"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.817476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"41f9ba5c9b4b761c4c48b1eb0c3ad5fdd722c316cf4c998656e3bcb31967430a"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.818608 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.826642 4739 generic.go:334] "Generic (PLEG): container finished" podID="582ba37d-9e3e-4696-a70e-69e702c6f931" 
containerID="e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" exitCode=2 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.826720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerDied","Data":"e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.875259 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" exitCode=2 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.875323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.891363 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9b578bfdc-tzd9g" podStartSLOduration=2.8913330999999998 podStartE2EDuration="2.8913331s" podCreationTimestamp="2026-01-21 15:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:49.873735941 +0000 UTC m=+1361.564442225" watchObservedRunningTime="2026-01-21 15:48:49.8913331 +0000 UTC m=+1361.582039354" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.147455 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.149782 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.152266 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.153689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.154086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.194011 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.277040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.357252 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378681 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.379004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.386269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.386345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.387988 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.429307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.480688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"582ba37d-9e3e-4696-a70e-69e702c6f931\" (UID: 
\"582ba37d-9e3e-4696-a70e-69e702c6f931\") " Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.491142 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x" (OuterVolumeSpecName: "kube-api-access-4k86x") pod "582ba37d-9e3e-4696-a70e-69e702c6f931" (UID: "582ba37d-9e3e-4696-a70e-69e702c6f931"). InnerVolumeSpecName "kube-api-access-4k86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.516295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.583552 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897053 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerDied","Data":"61ece0ca2bec34a69b536ce6fa39aec53042c12094f4235644f0b42c3bd4677d"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897398 4739 scope.go:117] "RemoveContainer" containerID="e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897221 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906616 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" exitCode=0 Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906647 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90" exitCode=0 Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.931327 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.946379 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.968858 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: E0121 15:48:50.969241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.969258 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.973903 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.974553 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.979214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.979365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.993123 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.060872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:51 crc kubenswrapper[4739]: W0121 15:48:51.073730 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f2f9172_8721_4518_ac4e_eec07c9fe663.slice/crio-daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807 WatchSource:0}: Error finding container daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807: Status 404 returned error can't find the container with id daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807 Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100426 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98qh\" (UniqueName: \"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100613 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98qh\" (UniqueName: 
\"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202943 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.208360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.209113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.211295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.229509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98qh\" (UniqueName: \"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.233711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.235556 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.282098 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.299584 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.304376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.304918 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.305096 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.407136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.407516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408742 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.435227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: 
\"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.699605 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.928192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerStarted","Data":"daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807"} Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.950043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.248487 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.794081 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" path="/var/lib/kubelet/pods/582ba37d-9e3e-4696-a70e-69e702c6f931/volumes" Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.938448 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"38cf7c08783b3706c4332fc09d24c7f21d7a00b0a9bcd6590f4c3e121d931487"} Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.939533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a559158-ae1f-4b55-bf71-90061b51b807","Type":"ContainerStarted","Data":"1bfb7820ffa851171082a880ece6372160dbe2b22a254a3bcf71bafc032f6fd0"} Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.949866 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595" exitCode=0 Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.949921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595"} Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.955307 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" exitCode=0 Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.955332 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.282963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.286240 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380752 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380842 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380919 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380949 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.381055 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.381160 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") "
Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390543 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390858 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" containerID="cri-o://fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565" gracePeriod=10 Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.392450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.396140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd" (OuterVolumeSpecName: "kube-api-access-bwpgd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "kube-api-access-bwpgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.397213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts" (OuterVolumeSpecName: "scripts") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483198 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483598 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483749 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483901 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.485099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.556932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.599320 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.599475 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.660065 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data" (OuterVolumeSpecName: "config-data") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.701627 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971504 4739 scope.go:117] "RemoveContainer" containerID="85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971508 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.978655 4739 generic.go:334] "Generic (PLEG): container finished" podID="63913da1-1f11-4850-9e92-a75afe2013f7" containerID="fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565" exitCode=0 Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.978696 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.985776 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.003862 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.031809 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077079 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077559 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077582 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent"
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077600 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077609 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd"
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077626 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077635 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns"
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077651 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077658 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core"
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077669 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="init"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077678 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="init"
Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077707 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077714 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077923 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077935 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077949 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121
15:48:55.077959 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077968 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.080048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.083448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.083808 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.084091 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.093226 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110521 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.141571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9" (OuterVolumeSpecName: "kube-api-access-pjgb9") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "kube-api-access-pjgb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.202956 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213090 
4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213104 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.234472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config" (OuterVolumeSpecName: "config") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.241079 4739 scope.go:117] "RemoveContainer" containerID="4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.263106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.303084 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315262 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315638 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315659 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315671 4739 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.316295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.317159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.321963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.326751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.331609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.331615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.332541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.336371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.398861 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.485345 4739 scope.go:117] "RemoveContainer" containerID="6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.543284 4739 scope.go:117] "RemoveContainer" containerID="dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.902756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:48:55 crc kubenswrapper[4739]: W0121 15:48:55.910638 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0be13e_8a7f_43b4_86e1_50a8249890f4.slice/crio-8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a WatchSource:0}: Error finding container 8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a: Status 404 returned error can't find the container with id 8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998415 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"1b39dcf58e2eff40de38a5ef2feefae8fb7d5ed95e0566e20b66ac63802c2ca3"}
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998456 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s"
Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998469 4739 scope.go:117] "RemoveContainer" containerID="fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565"
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.002640 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a559158-ae1f-4b55-bf71-90061b51b807","Type":"ContainerStarted","Data":"617f3d461f67389cc854eaa108a16213ad6e588f425798a3a00937f45133f738"}
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.003710 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.010809 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a"}
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.030213 4739 scope.go:117] "RemoveContainer" containerID="52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb"
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.034684 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.806826029 podStartE2EDuration="6.034656326s" podCreationTimestamp="2026-01-21 15:48:50 +0000 UTC" firstStartedPulling="2026-01-21 15:48:51.957853558 +0000 UTC m=+1363.648559822" lastFinishedPulling="2026-01-21 15:48:54.185683865 +0000 UTC m=+1365.876390119" observedRunningTime="2026-01-21 15:48:56.026562085 +0000 UTC m=+1367.717268349" watchObservedRunningTime="2026-01-21 15:48:56.034656326 +0000 UTC m=+1367.725362590"
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.050918 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"]
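Editor's note: the logged values in the startup-latency entry above are mutually consistent: podStartE2EDuration (about 6.0347s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (about 3.8068s) matches the E2E duration with the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.2278s) subtracted. A back-of-the-envelope check in Go against the logged timestamps; this relation is inferred from the numbers here, not read from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout of Go's default time.String() output, which these logs use.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-21 15:48:50 +0000 UTC")
	firstPull := parse("2026-01-21 15:48:51.957853558 +0000 UTC")
	lastPull := parse("2026-01-21 15:48:54.185683865 +0000 UTC")
	running := parse("2026-01-21 15:48:56.034656326 +0000 UTC")

	e2e := running.Sub(created)          // ≈ 6.034656326s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // ≈ 3.806826s   (podStartSLOduration)
	fmt.Println(e2e, slo)
}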
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.058743 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"]
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.804457 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" path="/var/lib/kubelet/pods/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf/volumes"
Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.805848 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" path="/var/lib/kubelet/pods/63913da1-1f11-4850-9e92-a75afe2013f7/volumes"
Jan 21 15:48:57 crc kubenswrapper[4739]: I0121 15:48:57.023314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"}
Jan 21 15:48:59 crc kubenswrapper[4739]: I0121 15:48:59.043178 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14" exitCode=0
Jan 21 15:48:59 crc kubenswrapper[4739]: I0121 15:48:59.043487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"}
Jan 21 15:49:01 crc kubenswrapper[4739]: I0121 15:49:01.313850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 15:49:02 crc kubenswrapper[4739]: I0121 15:49:02.076085 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6"}
Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.227934 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.228276 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.228343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.229408 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4" gracePeriod=600 Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142375 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4" exitCode=0 Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"} Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142904 4739 scope.go:117] "RemoveContainer" containerID="19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.905854 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.906101 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.906341 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.651061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="27acefc8-6355-40dc-aaa8-84029c626a0b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.153:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.932535 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.932775 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.933861 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.759964 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 
Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.759964 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified"
Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.760864 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24wlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-bfndp_openstack(7f2f9172-8721-4518-ac4e-eec07c9fe663): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.762479 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663"
Jan 21 15:49:19 crc kubenswrapper[4739]: I0121 15:49:19.231749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"}
Jan 21 15:49:19 crc kubenswrapper[4739]: E0121 15:49:19.293405 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663"
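Editor's note: after the ErrImagePull above, the next sync attempt is rejected with ImagePullBackOff, meaning the kubelet waits out a growing delay before retrying the pull. A toy Go sketch of that exponential backoff; the 10s initial delay and 5m cap are commonly cited kubelet defaults and are assumptions here, not values read from this cluster.

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // assumed initial backoff
		maxDelay     = 5 * time.Minute  // assumed cap
	)
	delay := initialDelay
	for attempt := 1; attempt <= 6; attempt++ {
		// Each failed pull doubles the wait, up to the cap.
		fmt.Printf("attempt %d failed: ErrImagePull; backing off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}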
for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"} Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.251640 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3"} Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.280608 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dbkhd" podStartSLOduration=4.749559449 podStartE2EDuration="30.280585188s" podCreationTimestamp="2026-01-21 15:48:51 +0000 UTC" firstStartedPulling="2026-01-21 15:48:54.184082551 +0000 UTC m=+1365.874788815" lastFinishedPulling="2026-01-21 15:49:19.71510829 +0000 UTC m=+1391.405814554" observedRunningTime="2026-01-21 15:49:21.271633723 +0000 UTC m=+1392.962339997" watchObservedRunningTime="2026-01-21 15:49:21.280585188 +0000 UTC m=+1392.971291442" Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.701074 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.701620 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.280960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0"} Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.289653 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.760416 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dbkhd" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" probeResult="failure" output=< Jan 21 15:49:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 15:49:22 crc kubenswrapper[4739]: > Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b"} Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293678 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core" containerID="cri-o://e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293645 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd" containerID="cri-o://bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" gracePeriod=30 Jan 21 15:49:23 crc 
kubenswrapper[4739]: I0121 15:49:23.293939 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent" containerID="cri-o://e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293715 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent" containerID="cri-o://7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.329237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.396510386 podStartE2EDuration="28.329220567s" podCreationTimestamp="2026-01-21 15:48:55 +0000 UTC" firstStartedPulling="2026-01-21 15:48:55.913771067 +0000 UTC m=+1367.604477331" lastFinishedPulling="2026-01-21 15:49:22.846481248 +0000 UTC m=+1394.537187512" observedRunningTime="2026-01-21 15:49:23.323607684 +0000 UTC m=+1395.014313948" watchObservedRunningTime="2026-01-21 15:49:23.329220567 +0000 UTC m=+1395.019926831" Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.304953 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" exitCode=2 Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.304992 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" exitCode=0 Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.305026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0"} Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.305079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3"} Jan 21 15:49:25 crc kubenswrapper[4739]: I0121 15:49:25.316451 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" exitCode=0 Jan 21 15:49:25 crc kubenswrapper[4739]: I0121 15:49:25.316533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6"} Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.744878 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.804973 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.983022 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:49:32 crc kubenswrapper[4739]: 
I0121 15:49:32.384017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerStarted","Data":"64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225"}
Jan 21 15:49:32 crc kubenswrapper[4739]: I0121 15:49:32.417220 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podStartSLOduration=2.077854065 podStartE2EDuration="42.417196739s" podCreationTimestamp="2026-01-21 15:48:50 +0000 UTC" firstStartedPulling="2026-01-21 15:48:51.0763383 +0000 UTC m=+1362.767044564" lastFinishedPulling="2026-01-21 15:49:31.415680984 +0000 UTC m=+1403.106387238" observedRunningTime="2026-01-21 15:49:32.409478397 +0000 UTC m=+1404.100184661" watchObservedRunningTime="2026-01-21 15:49:32.417196739 +0000 UTC m=+1404.107903003"
Jan 21 15:49:33 crc kubenswrapper[4739]: I0121 15:49:33.408657 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dbkhd" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" containerID="cri-o://b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" gracePeriod=2
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.056054 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.142980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") "
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.143268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") "
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.143376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") "
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.144435 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities" (OuterVolumeSpecName: "utilities") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.150054 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl" (OuterVolumeSpecName: "kube-api-access-96thl") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "kube-api-access-96thl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.245800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.246096 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.287097 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.348191 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419569 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" exitCode=0
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"}
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"38cf7c08783b3706c4332fc09d24c7f21d7a00b0a9bcd6590f4c3e121d931487"}
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419648 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419665 4739 scope.go:117] "RemoveContainer" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.457342 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"]
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.464039 4739 scope.go:117] "RemoveContainer" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.466522 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"]
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.488185 4739 scope.go:117] "RemoveContainer" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.528518 4739 scope.go:117] "RemoveContainer" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"
Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.529283 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": container with ID starting with b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383 not found: ID does not exist" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529324 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"} err="failed to get container status \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": rpc error: code = NotFound desc = could not find container \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": container with ID starting with b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383 not found: ID does not exist"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529350 4739 scope.go:117] "RemoveContainer" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"
Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.529707 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": container with ID starting with a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14 not found: ID does not exist" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"
Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529729 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"} err="failed to get container status \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": rpc error: code = NotFound desc = could not find container \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": container with ID starting with a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14 not found: ID does not exist"
containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.530145 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": container with ID starting with a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15 not found: ID does not exist" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.530167 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15"} err="failed to get container status \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": rpc error: code = NotFound desc = could not find container \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": container with ID starting with a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15 not found: ID does not exist" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.792731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" path="/var/lib/kubelet/pods/63170e4a-4759-4950-a949-7cf2c0f24335/volumes" Jan 21 15:49:44 crc kubenswrapper[4739]: I0121 15:49:44.915041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:49:47 crc kubenswrapper[4739]: I0121 15:49:47.943625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020467 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020677 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" containerID="cri-o://8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9" gracePeriod=30 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020960 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" containerID="cri-o://b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f" gracePeriod=30 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.558970 4739 generic.go:334] "Generic (PLEG): container finished" podID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerID="b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f" exitCode=0 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.559020 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f"} Jan 21 15:49:50 crc kubenswrapper[4739]: E0121 15:49:50.580980 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116a13ea_fefe_44b4_8542_34cf022a48e0.slice/crio-8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9.scope\": RecentStats: 
unable to find data in memory cache]" Jan 21 15:49:50 crc kubenswrapper[4739]: I0121 15:49:50.591700 4739 generic.go:334] "Generic (PLEG): container finished" podID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerID="8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9" exitCode=0 Jan 21 15:49:50 crc kubenswrapper[4739]: I0121 15:49:50.591802 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9"} Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.126601 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315005 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315107 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315225 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.322291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p" (OuterVolumeSpecName: "kube-api-access-v4t8p") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "kube-api-access-v4t8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.328669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.369966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.383657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config" (OuterVolumeSpecName: "config") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.397448 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416749 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416791 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416801 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416811 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416836 4739 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.601922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342"} Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.602587 4739 scope.go:117] "RemoveContainer" containerID="b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.602780 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.646468 4739 scope.go:117] "RemoveContainer" containerID="8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9" Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.647671 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.659344 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.610581 4739 generic.go:334] "Generic (PLEG): container finished" podID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerID="64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225" exitCode=0 Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.610662 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerDied","Data":"64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225"} Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.792788 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" path="/var/lib/kubelet/pods/116a13ea-fefe-44b4-8542-34cf022a48e0/volumes" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623135 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" exitCode=137 Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b"} Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a"} Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623612 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.675449 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.862772 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.862892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863117 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863881 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.864570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.868081 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts" (OuterVolumeSpecName: "scripts") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.877072 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6" (OuterVolumeSpecName: "kube-api-access-rm2l6") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "kube-api-access-rm2l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.921056 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.936196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.956632 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.972150 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973779 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973793 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973844 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973853 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.976361 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data" (OuterVolumeSpecName: "config-data") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.979247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts" (OuterVolumeSpecName: "scripts") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.980672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx" (OuterVolumeSpecName: "kube-api-access-24wlx") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "kube-api-access-24wlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.986095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.999649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data" (OuterVolumeSpecName: "config-data") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.006149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075566 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075599 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075610 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075620 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075632 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075641 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerDied","Data":"daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807"} Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.635118 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634737 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634724 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.686781 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.696268 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712687 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712715 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712729 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712740 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712757 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712766 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712783 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712791 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712805 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712813 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712849 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-utilities" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712858 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-utilities" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712889 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-content" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712899 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-content" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712919 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent" Jan 21 
15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712927 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712941 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712971 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713206 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713226 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713251 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713261 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713299 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.715124 4739 util.go:30] "No sandbox for pod can be found. 
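The RemoveStaleState pairs above show the CPU and memory managers dropping per-container resource assignments for pods that no longer exist before admitting ceilometer-0's replacement. A compact sketch of the idea (hypothetical data layout, not the managers' actual state files):

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Assignments keyed by (podUID, container); entries whose pod is
	// no longer active are removed before new pods are assigned.
	assignments := map[key]string{
		{"63170e4a-4759-4950-a949-7cf2c0f24335", "registry-server"}: "cpus 0-1",
		{"0ee4add2-be9f-4b5d-8199-74b9b0376900", "proxy-httpd"}:     "cpus 2-3",
	}
	activePods := map[string]bool{"0ee4add2-be9f-4b5d-8199-74b9b0376900": true}

	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}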
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.739958 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.740238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.742228 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.749412 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787685 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787720 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787770 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.794193 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" path="/var/lib/kubelet/pods/2e0be13e-8a7f-43b4-86e1-50a8249890f4/volumes" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.795022 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.796544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.802422 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.808421 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.810607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc 
kubenswrapper[4739]: I0121 15:49:54.889934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.891702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.896533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.896800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.897106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0" Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 
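Each MountVolume.SetUp above materializes secret or projected data onto disk in a way that readers never observe a half-written payload. A minimal stand-in for that write-then-rename step (paths and payload are illustrative; rename is only atomic within a single filesystem, and the kubelet's actual writer does considerably more):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeAtomic writes to a temp file, then renames it into place, so a
// reader sees either the old content or the new content, never a mix.
func writeAtomic(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, ".."+name+".tmp")
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	dir, err := os.MkdirTemp("", "volume-setup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer os.RemoveAll(dir)
	if err := writeAtomic(dir, "combined-ca-bundle", []byte("...pem...")); err != nil {
		fmt.Fprintln(os.Stderr, "SetUp failed:", err)
		return
	}
	fmt.Println("atomic write succeeded in", dir)
}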
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.900208 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.921004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.928916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.996882 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.997316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.011368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.029896 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.120997 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.600295 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.658781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"e77898541118cfa971f128dff0eb382e3a341312cf058739a5aae30d4d0aa454"}
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.914309 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 15:49:55 crc kubenswrapper[4739]: W0121 15:49:55.916752 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6e43f8_c2d1_4991_992b_30ebd3fc66cf.slice/crio-ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022 WatchSource:0}: Error finding container ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022: Status 404 returned error can't find the container with id ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668109 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf","Type":"ContainerStarted","Data":"1a26997a1518409a79b1bfdbc5414a85a6e599a5f0c6049578157ac199e52f4f"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf","Type":"ContainerStarted","Data":"ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668442 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.669300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.693754 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.693728837 podStartE2EDuration="2.693728837s" podCreationTimestamp="2026-01-21 15:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:56.688260417 +0000 UTC m=+1428.378966681" watchObservedRunningTime="2026-01-21 15:49:56.693728837 +0000 UTC m=+1428.384435101"
Jan 21 15:49:57 crc kubenswrapper[4739]: I0121 15:49:57.682926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39"}
Jan 21 15:49:58 crc kubenswrapper[4739]: I0121 15:49:58.692928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1"}
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.718221 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073"}
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.718918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.753230 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6148098859999998 podStartE2EDuration="6.75320135s" podCreationTimestamp="2026-01-21 15:49:54 +0000 UTC" firstStartedPulling="2026-01-21 15:49:55.614166544 +0000 UTC m=+1427.304872818" lastFinishedPulling="2026-01-21 15:49:59.752558018 +0000 UTC m=+1431.443264282" observedRunningTime="2026-01-21 15:50:00.746387713 +0000 UTC m=+1432.437093987" watchObservedRunningTime="2026-01-21 15:50:00.75320135 +0000 UTC m=+1432.443908284"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.150707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.657291 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"]
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.661568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.664279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.673345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"]
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.676035 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708755 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
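The ceilometer-0 startup-latency entry above is internally consistent: the E2E duration is observed-running minus creation, and the SLO figure additionally discounts the image-pull window. Recomputing from the logged timestamps (the printed E2E matches the watch-observed time; agreement with podStartSLOduration is up to floating-point rounding):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout for Go's default time.Time string form, as printed in the log.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-21 15:49:54 +0000 UTC")
	firstPull := parse("2026-01-21 15:49:55.614166544 +0000 UTC")
	lastPull := parse("2026-01-21 15:49:59.752558018 +0000 UTC")
	observed := parse("2026-01-21 15:50:00.75320135 +0000 UTC")

	e2e := observed.Sub(created)         // 6.75320135s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.614809876s ~ the logged podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}

The nova-cell0-conductor-0 entry shows the degenerate case: no image pull happened (both pull timestamps are the zero time 0001-01-01), so its SLO and E2E durations are identical.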
\"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.821005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.824581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.839459 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.863607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.869432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.885757 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.892329 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.899608 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915628 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.978874 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.039591 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.039895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.040034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.040131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.042285 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.048186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.049084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.052735 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.081200 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.081570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.109793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.152485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.160061 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.161379 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.185233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.219902 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.221239 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.224286 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243018 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.253398 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.257924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.260494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.272518 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.293303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.302102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345829 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345859 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc 
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.353783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.361504 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.367636 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"]
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.369097 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.379418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.451050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.458557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.463508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.464688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"]
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.477802 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.494710 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.510849 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.585418 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586680 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.587338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.587721 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.588298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.592645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.623722 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.698167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9"
Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.950078 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"]
Jan 21 15:50:06 crc kubenswrapper[4739]: W0121 15:50:06.963424 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbee6ce08_4c84_436e_bf6c_78edfd72079e.slice/crio-cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf WatchSource:0}: Error finding container cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf: Status 404 returned error can't find the container with id cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf
Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.176634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.200926 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.257021 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"]
Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.258509 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj"
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.261930 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.262473 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.270591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.282287 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.317567 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: 
\"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372868 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.383674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.388418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.388620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.392503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.515234 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.557889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:07 crc kubenswrapper[4739]: W0121 15:50:07.564567 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8c2262_2594_4058_a243_3d253315507d.slice/crio-63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e WatchSource:0}: Error finding container 63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e: Status 404 returned error can't find the container with id 63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.587687 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.811713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"cd054e3186b65e13c831256094c8d78183d241118f5f0222014b89f943cfeb49"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.813718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"b7f3f2c8839db57ca9ea84ab093ba98b849f20cd54f510f023a4d74cdb39800e"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.815114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerStarted","Data":"dd119fb8c085ad74cdde916029bf058ec070273c83f1f37068667b12423f7bc9"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.816895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.834222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerStarted","Data":"a5d6ca0e09184dd575178e9f566e5c10ecc3f8a3b718b6cc7ba6599515b2f0fb"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.837418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerStarted","Data":"5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.837470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerStarted","Data":"cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.875887 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7jt2b" podStartSLOduration=2.875872174 podStartE2EDuration="2.875872174s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:07.86368058 +0000 UTC m=+1439.554386844" watchObservedRunningTime="2026-01-21 15:50:07.875872174 +0000 UTC m=+1439.566578438" Jan 21 15:50:08 crc kubenswrapper[4739]: I0121 15:50:08.338397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 15:50:08 crc kubenswrapper[4739]: I0121 15:50:08.850675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerStarted","Data":"a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13"} Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.854375 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.872075 4739 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.879381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.894137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerStarted","Data":"4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.899665 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac8c2262-2594-4058-a243-3d253315507d" containerID="324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b" exitCode=0 Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.899717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.924936 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" podStartSLOduration=3.924714532 podStartE2EDuration="3.924714532s" podCreationTimestamp="2026-01-21 15:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:10.918350409 +0000 UTC m=+1442.609056673" watchObservedRunningTime="2026-01-21 15:50:10.924714532 +0000 UTC m=+1442.615420796" Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.916412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd"} Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.916499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.962806 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podStartSLOduration=5.962761619 podStartE2EDuration="5.962761619s" podCreationTimestamp="2026-01-21 15:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:11.960546798 +0000 UTC m=+1443.651253082" watchObservedRunningTime="2026-01-21 15:50:11.962761619 +0000 UTC m=+1443.653467883" Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.952536 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerStarted","Data":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.958689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 
15:50:13.958738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.958878 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" containerID="cri-o://3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.959003 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" containerID="cri-o://a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.965099 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.965147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.972008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerStarted","Data":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.973928 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.974604 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.5529845460000002 podStartE2EDuration="8.97458942s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.205135747 +0000 UTC m=+1438.895842011" lastFinishedPulling="2026-01-21 15:50:12.626740621 +0000 UTC m=+1444.317446885" observedRunningTime="2026-01-21 15:50:13.972210304 +0000 UTC m=+1445.662916568" watchObservedRunningTime="2026-01-21 15:50:13.97458942 +0000 UTC m=+1445.665295684" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.002396 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.805930613 podStartE2EDuration="8.00234792s" podCreationTimestamp="2026-01-21 15:50:06 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.528123252 +0000 UTC m=+1439.218829516" lastFinishedPulling="2026-01-21 15:50:12.724540559 +0000 UTC m=+1444.415246823" observedRunningTime="2026-01-21 15:50:13.991717019 +0000 UTC m=+1445.682423293" watchObservedRunningTime="2026-01-21 15:50:14.00234792 +0000 UTC m=+1445.693054194" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.030599 4739 
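The pod_startup_latency_tracker records encode one identity worth knowing when reading them: podStartSLOduration is podStartE2EDuration minus the image-pull window. For nova-scheduler-0 above, lastFinishedPulling minus firstStartedPulling (via the monotonic m=+ offsets) is 1444.317446885 - 1438.895842011 = 5.421604874 s, and 8.97458942 s - 5.421604874 s = 3.552984546 s, exactly the logged SLO figure; pods that pulled nothing log the zero time for both pulling fields and SLO == E2E, as nova-cell0-cell-mapping-7jt2b did earlier. A sketch (illustrative names) that re-derives the SLO figure from those offsets and flags any mismatch:

# slo_check.py - verify SLO = E2E - (image pull window) from m=+ monotonic offsets
import re
import sys

REC = re.compile(r'pod="([^"]+)" podStartSLOduration=([\d.]+) podStartE2EDuration="([\d.]+)s"')
MONO = re.compile(r'(firstStartedPulling|lastFinishedPulling)="[^"]*?m=\+([\d.]+)"')

def check(path):
    for line in open(path, errors='replace'):
        m = REC.search(line)
        if not m:
            continue
        pod, slo, e2e = m.group(1), float(m.group(2)), float(m.group(3))
        offs = {k: float(v) for k, v in MONO.findall(line)}
        # zero-time fields carry no m=+ offset, so a pull-less start yields pull == 0
        pull = offs.get('lastFinishedPulling', 0.0) - offs.get('firstStartedPulling', 0.0)
        yield pod, slo, e2e, pull, abs(e2e - pull - slo) < 1e-6

if __name__ == '__main__':
    for pod, slo, e2e, pull, ok in check(sys.argv[1]):
        print(f'{pod}: slo={slo:.3f}s e2e={e2e:.3f}s pull={pull:.3f}s {"ok" if ok else "MISMATCH"}')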
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.048335 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.6115924809999997 podStartE2EDuration="9.048315428s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.186659912 +0000 UTC m=+1438.877366176" lastFinishedPulling="2026-01-21 15:50:12.623382859 +0000 UTC m=+1444.314089123" observedRunningTime="2026-01-21 15:50:14.036352201 +0000 UTC m=+1445.727058465" watchObservedRunningTime="2026-01-21 15:50:14.048315428 +0000 UTC m=+1445.739021712"
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.540559 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") "
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615135 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") "
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615170 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") "
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") "
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615404 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs" (OuterVolumeSpecName: "logs") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615523 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.620208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9" (OuterVolumeSpecName: "kube-api-access-w9sv9") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "kube-api-access-w9sv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.653601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data" (OuterVolumeSpecName: "config-data") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.665701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716032 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716060 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716070 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.980988 4739 generic.go:334] "Generic (PLEG): container finished" podID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" exitCode=0
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981288 4739 generic.go:334] "Generic (PLEG): container finished" podID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" exitCode=143
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981085 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"}
Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981340 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"}
event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"cd054e3186b65e13c831256094c8d78183d241118f5f0222014b89f943cfeb49"} Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981065 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981394 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.024594 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.040579 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.089177 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.096099 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.109024 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109091 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} err="failed to get container status \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109123 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.109955 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109995 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} err="failed to get container status \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": rpc error: code = NotFound desc = could not find 
container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110024 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110776 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} err="failed to get container status \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110801 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110889 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.111516 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111533 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.111568 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111576 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111763 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111777 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111784 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} err="failed to get container status \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": rpc error: code = NotFound desc = could not find container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.113187 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.124933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.127232 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.142645 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244176 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " 
pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.347083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.354007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.356342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.356440 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.376338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.454979 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.933601 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.992447 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"5e6e02f1496d3aef42069ee14f55f52e9e747e69dc4c7555c717e2f6f10e625d"} Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.273174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.273523 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.494975 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.495024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.557250 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.586483 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.700612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.757131 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.757409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" containerID="cri-o://bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" gracePeriod=10 Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.808136 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" path="/var/lib/kubelet/pods/0102143e-dd8e-417e-aaa4-ed1567d5b471/volumes" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.004285 4739 generic.go:334] "Generic (PLEG): container finished" podID="5091d434-2266-4386-a1b1-ce00719cd889" containerID="bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" exitCode=0 Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.004362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.006510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.006548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.039343 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.039324383 podStartE2EDuration="2.039324383s" podCreationTimestamp="2026-01-21 15:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:17.034129491 +0000 UTC m=+1448.724835755" watchObservedRunningTime="2026-01-21 15:50:17.039324383 +0000 UTC m=+1448.730030647" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.065046 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.359029 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.359035 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.526443 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696557 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696976 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.697115 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.697191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 
15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.711119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7" (OuterVolumeSpecName: "kube-api-access-lw2w7") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "kube-api-access-lw2w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.742215 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.747208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config" (OuterVolumeSpecName: "config") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.758730 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.764578 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799307 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799350 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799361 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799370 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799380 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.017438 4739 generic.go:334] "Generic (PLEG): container finished" podID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerID="5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b" exitCode=0 Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.017738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerDied","Data":"5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b"} Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.021944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"e034200d9d2fe17264411387abcf6da9e0fcd72661056799249816cb13df0c87"} Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.022012 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.022018 4739 scope.go:117] "RemoveContainer" containerID="bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.097933 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.105677 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.508179 4739 scope.go:117] "RemoveContainer" containerID="dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.794216 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5091d434-2266-4386-a1b1-ce00719cd889" path="/var/lib/kubelet/pods/5091d434-2266-4386-a1b1-ce00719cd889/volumes" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.433829 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540080 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540331 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.545995 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts" (OuterVolumeSpecName: "scripts") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.549962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj" (OuterVolumeSpecName: "kube-api-access-ftpmj") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "kube-api-access-ftpmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.566945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data" (OuterVolumeSpecName: "config-data") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.571367 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641863 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641896 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641909 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641920 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerDied","Data":"cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf"} Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045544 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045241 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236069 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236639 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" containerID="cri-o://ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236783 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" containerID="cri-o://c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.259900 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.262050 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" containerID="cri-o://6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.286891 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.287158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" 
containerName="nova-metadata-log" containerID="cri-o://01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.287520 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" containerID="cri-o://59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.455904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.455952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.056186 4739 generic.go:334] "Generic (PLEG): container finished" podID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" exitCode=143 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.056293 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058700 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerID="59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" exitCode=0 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058725 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerID="01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" exitCode=143 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71"} Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1"} Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.481293 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961aae12_5a2d_4166_a897_1aa496d25ce2.slice/crio-conmon-6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961aae12_5a2d_4166_a897_1aa496d25ce2.slice/crio-6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.495239 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" 
containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.497166 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.497901 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.498544 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.498648 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678917 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679059 4739 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs" (OuterVolumeSpecName: "logs") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679443 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.686289 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw" (OuterVolumeSpecName: "kube-api-access-g6gjw") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "kube-api-access-g6gjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.707252 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.713706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data" (OuterVolumeSpecName: "config-data") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.744987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.756657 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781362 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781394 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781408 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781416 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883145 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883350 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883515 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.887254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq" (OuterVolumeSpecName: "kube-api-access-5gsnq") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "kube-api-access-5gsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.911902 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.912535 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data" (OuterVolumeSpecName: "config-data") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986659 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986708 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986720 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068639 4739 generic.go:334] "Generic (PLEG): container finished" podID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" exitCode=0 Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068695 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerDied","Data":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068747 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerDied","Data":"a5d6ca0e09184dd575178e9f566e5c10ecc3f8a3b718b6cc7ba6599515b2f0fb"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068766 4739 scope.go:117] "RemoveContainer" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.071430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"5e6e02f1496d3aef42069ee14f55f52e9e747e69dc4c7555c717e2f6f10e625d"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.071483 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.106644 4739 scope.go:117] "RemoveContainer" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.107513 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": container with ID starting with 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e not found: ID does not exist" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.107619 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} err="failed to get container status \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": rpc error: code = NotFound desc = could not find container \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": container with ID starting with 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e not found: ID does not exist" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.113868 4739 scope.go:117] "RemoveContainer" containerID="59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.137786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.160211 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.170460 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.170494 4739 scope.go:117] "RemoveContainer" containerID="01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.185496 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.185983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186003 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186012 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186018 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186043 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186075 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="init" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186097 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="init" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186107 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186113 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186292 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186331 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186359 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186380 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186403 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.187041 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.198123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.201003 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.218183 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.219768 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.223012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.223311 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.247478 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.271426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.298413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.299067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.299243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.401666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402835 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402978 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: 
\"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403335 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403411 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.407081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.408477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.423406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.505867 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.505991 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc 
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.508631 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.510329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.511057 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.514915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.527972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.544934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.801900 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" path="/var/lib/kubelet/pods/2a666b78-0181-4f41-8a61-6e55c48a4036/volumes"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.803008 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" path="/var/lib/kubelet/pods/961aae12-5a2d-4166-a897-1aa496d25ce2/volumes"
Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.996556 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 15:50:23 crc kubenswrapper[4739]: I0121 15:50:23.081242 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerStarted","Data":"beda81d6da457712fe5c401d53b87cfc884dc8cafe3280da9942bc39ff45cd46"}
Jan 21 15:50:23 crc kubenswrapper[4739]: I0121 15:50:23.134538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 15:50:23 crc kubenswrapper[4739]: W0121 15:50:23.138751 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5597c9e8_b443_4188_be2b_e01fb486489b.slice/crio-95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d WatchSource:0}: Error finding container 95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d: Status 404 returned error can't find the container with id 95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d
Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.045188 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092157 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092204 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.094471 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerID="4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0" exitCode=0 Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.094514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerDied","Data":"4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097310 4739 generic.go:334] "Generic (PLEG): container finished" podID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" exitCode=0 Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"b7f3f2c8839db57ca9ea84ab093ba98b849f20cd54f510f023a4d74cdb39800e"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097440 4739 scope.go:117] "RemoveContainer" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097470 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.103139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerStarted","Data":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.131935 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.131909704 podStartE2EDuration="2.131909704s" podCreationTimestamp="2026-01-21 15:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:24.123340919 +0000 UTC m=+1455.814047183" watchObservedRunningTime="2026-01-21 15:50:24.131909704 +0000 UTC m=+1455.822615978" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.133042 4739 scope.go:117] "RemoveContainer" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.135799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.136705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs" (OuterVolumeSpecName: "logs") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.163908 4739 scope.go:117] "RemoveContainer" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.172067 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": container with ID starting with c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26 not found: ID does not exist" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.172108 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} err="failed to get container status \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": rpc error: code = NotFound desc = could not find container \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": container with ID starting with c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26 not found: ID does not exist" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.172135 4739 scope.go:117] "RemoveContainer" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.174045 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.174021927 podStartE2EDuration="2.174021927s" podCreationTimestamp="2026-01-21 15:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:24.164054494 +0000 UTC m=+1455.854760758" watchObservedRunningTime="2026-01-21 15:50:24.174021927 +0000 UTC m=+1455.864728191" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.174300 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": container with ID starting with ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1 not found: ID does not exist" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.174336 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} err="failed to get container status \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": rpc error: code = NotFound desc = could not find container \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": container with ID starting with ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1 not found: ID does not exist" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.237732 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.237893 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.238088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.238600 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.272146 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m" (OuterVolumeSpecName: "kube-api-access-b982m") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "kube-api-access-b982m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.294414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data" (OuterVolumeSpecName: "config-data") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.300380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340465 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340727 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340863 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.435952 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.448132 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.471768 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.472193 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472215 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.472226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472234 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472420 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472461 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.474181 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.476214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.489168 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544102 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544268 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.644887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.644945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " 
pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.648690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.648720 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.661542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.790226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.800557 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" path="/var/lib/kubelet/pods/b36584f8-8253-4782-a5e2-7cd154ce0048/volumes" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.194213 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.448054 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.643290 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688174 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.697577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr" (OuterVolumeSpecName: "kube-api-access-pcffr") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "kube-api-access-pcffr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.702978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts" (OuterVolumeSpecName: "scripts") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.721406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.735177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data" (OuterVolumeSpecName: "config-data") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790432 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790481 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790499 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790514 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238474 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"d7c60937945a51166530d318bb4205d3b87a860bdee1a6c766190c05f9bfff35"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerDied","Data":"a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240568 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240664 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.283119 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:50:26 crc kubenswrapper[4739]: E0121 15:50:26.283681 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.283725 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.284566 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.285390 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.289408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.317043 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.31702206 podStartE2EDuration="2.31702206s" podCreationTimestamp="2026-01-21 15:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:26.275287717 +0000 UTC m=+1457.965993981" watchObservedRunningTime="2026-01-21 15:50:26.31702206 +0000 UTC m=+1458.007728324" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.317247 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.402716 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.403077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.403123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.504788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.505004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.505090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.511463 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.514270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.523805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.622321 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.084270 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 15:50:27 crc kubenswrapper[4739]: W0121 15:50:27.087279 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05cfdc9a_d9ef_45eb_99dd_a7393fdca241.slice/crio-5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173 WatchSource:0}: Error finding container 5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173: Status 404 returned error can't find the container with id 5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173
Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.256694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05cfdc9a-d9ef-45eb-99dd-a7393fdca241","Type":"ContainerStarted","Data":"5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173"}
Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.508713 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.545942 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.547064 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.267510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05cfdc9a-d9ef-45eb-99dd-a7393fdca241","Type":"ContainerStarted","Data":"00f806033d224e48cdbd142b91747eb04144f7604c25983a91ae6b5b045cd82c"}
Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.268535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.287310 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.287287643 podStartE2EDuration="2.287287643s" podCreationTimestamp="2026-01-21 15:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:28.285297549 +0000 UTC m=+1459.976003843" watchObservedRunningTime="2026-01-21 15:50:28.287287643 +0000 UTC m=+1459.977993907"
Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.509508 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.538408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.546070 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.546130 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.339158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.558114 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.558234 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:34 crc kubenswrapper[4739]: I0121 15:50:34.793873 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:34 crc kubenswrapper[4739]: I0121 15:50:34.794454 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:35 crc kubenswrapper[4739]: I0121 15:50:35.873247 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:35 crc kubenswrapper[4739]: I0121 15:50:35.873485 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:36 crc kubenswrapper[4739]: I0121 15:50:36.648492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.553097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.554087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.561913 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:50:43 crc kubenswrapper[4739]: I0121 15:50:43.419423 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.412895 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422853 4739 generic.go:334] "Generic (PLEG): container finished" podID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" exitCode=137 Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerDied","Data":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerDied","Data":"dd119fb8c085ad74cdde916029bf058ec070273c83f1f37068667b12423f7bc9"} Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422996 4739 scope.go:117] "RemoveContainer" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.423094 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.454664 4739 scope.go:117] "RemoveContainer" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: E0121 15:50:44.455799 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": container with ID starting with 5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be not found: ID does not exist" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.455953 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} err="failed to get container status \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": rpc error: code = NotFound desc = could not find container \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": container with ID starting with 5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be not found: ID does not exist" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536848 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod 
\"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.546130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk" (OuterVolumeSpecName: "kube-api-access-kmgsk") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "kube-api-access-kmgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.566229 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.570741 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data" (OuterVolumeSpecName: "config-data") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638849 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638891 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638906 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.757319 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.766796 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.794288 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" path="/var/lib/kubelet/pods/1782a09d-e578-4628-bff0-c745b8fc5b33/volumes" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.795695 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: E0121 15:50:44.800973 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.801002 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.801294 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.807284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.809477 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814641 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814931 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814999 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.825852 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.051633 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.051620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.052564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.052695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 
21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.064249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.139681 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.433715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.438462 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.618593 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.621170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.677587 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.694193 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " 
pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.865938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.868629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.868659 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.870155 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.870367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.872266 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.872771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.874344 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.879870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.889549 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.958196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.447037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52afdd4f-bb93-4cc6-b074-7391852099ee","Type":"ContainerStarted","Data":"0acdb1d36abc85e88970f31bd0ad412405d9310cad5a753684f639c6926e551f"} Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.447491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52afdd4f-bb93-4cc6-b074-7391852099ee","Type":"ContainerStarted","Data":"f12c068910afc23d821c5719c9288e530400c8e7ac49b7e22f4de4f36f32606d"} Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.479029 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.478992828 podStartE2EDuration="2.478992828s" podCreationTimestamp="2026-01-21 15:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:46.472754646 +0000 UTC m=+1478.163460910" watchObservedRunningTime="2026-01-21 15:50:46.478992828 +0000 UTC m=+1478.169699102" Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.532505 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:46 crc kubenswrapper[4739]: W0121 15:50:46.546668 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac0420ff_cde9_4c4c_962a_ac17b202c464.slice/crio-e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1 WatchSource:0}: Error finding container e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1: Status 404 returned error can't find the container with id e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1 Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.474667 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerID="35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812" exitCode=0 Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.474723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812"} Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.475083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerStarted","Data":"e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1"} Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.485662 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" 
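Annotation: the podStartSLOduration=2.478992828 reported above is the watch-observed running time minus the pod creation timestamp (15:50:46.478992828 - 15:50:44 = 2.478992828s; the dnsmasq value 3.524525882 below matches the same arithmetic), and the zero-valued firstStartedPulling/lastFinishedPulling indicate no image pull was needed. The W0121 "Failed to process watch event ... 404" is a benign race: cadvisor's cgroup watch fired for a container CRI-O had not yet (or no longer) registered. A quick check of the arithmetic, under the assumption that the duration is exactly the difference of those two log timestamps:

    // Verifies the startup-latency arithmetic seen in the log entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-21 15:50:44 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-21 15:50:46.478992828 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 2.478992828s
    }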
event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerStarted","Data":"711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9"} Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.487516 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510580 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510840 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" containerID="cri-o://1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" gracePeriod=30 Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510949 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" containerID="cri-o://8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" gracePeriod=30 Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.524549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" podStartSLOduration=3.524525882 podStartE2EDuration="3.524525882s" podCreationTimestamp="2026-01-21 15:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:48.522012633 +0000 UTC m=+1480.212718907" watchObservedRunningTime="2026-01-21 15:50:48.524525882 +0000 UTC m=+1480.215232146" Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201465 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201846 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" containerID="cri-o://2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201875 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" containerID="cri-o://f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201805 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" containerID="cri-o://b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.202001 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" containerID="cri-o://f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.494752 4739 generic.go:334] "Generic (PLEG): container finished" podID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" exitCode=143 Jan 21 15:50:49 crc 
kubenswrapper[4739]: I0121 15:50:49.495014 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.497336 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" exitCode=0 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.497355 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" exitCode=2 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.498106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073"} Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.498129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1"} Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.140906 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.509107 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" exitCode=0 Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.509159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575"} Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.520593 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" exitCode=0 Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.520637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39"} Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.738555 4739 util.go:48] "No ready sandbox for pod can be found. 
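Annotation: the teardown above is the standard graceful path. SyncLoop DELETE triggers "Killing container with a grace period" (gracePeriod=30): the runtime delivers SIGTERM, and SIGKILL follows only if the grace period expires. The exit codes fit that: nova-api-log died with 143 (128+15, terminated by SIGTERM), sg-core exited 2, and the other containers shut down cleanly with 0. A small decoder for the 128+N convention, assuming the usual POSIX shell-style encoding of signal deaths:

    // Decodes the 128+N exit-code convention, as with exitCode=143 above.
    // Sketch only; the kubelet itself just reports what the runtime returns.
    package main

    import (
        "fmt"
        "syscall"
    )

    func describeExit(code int) string {
        if code > 128 {
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("killed by signal %d (%s)", code-128, sig)
        }
        return fmt.Sprintf("exited normally with status %d", code)
    }

    func main() {
        fmt.Println(describeExit(143)) // killed by signal 15 (terminated)
        fmt.Println(describeExit(2))
        fmt.Println(describeExit(0))
    }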
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889609 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889698 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.890504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.890861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.895939 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts" (OuterVolumeSpecName: "scripts") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.898997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv" (OuterVolumeSpecName: "kube-api-access-tjvlv") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "kube-api-access-tjvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.922642 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991488 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991532 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991545 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991557 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991568 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.028029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.042340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data" (OuterVolumeSpecName: "config-data") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.074472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092911 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092978 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092994 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.228938 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb622bd61_6047_41a6_b6ef_d687e8973df6.slice/crio-8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb622bd61_6047_41a6_b6ef_d687e8973df6.slice/crio-conmon-8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.459765 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534413 4739 generic.go:334] "Generic (PLEG): container finished" podID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" exitCode=0 Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534521 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534556 4739 scope.go:117] "RemoveContainer" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534546 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"d7c60937945a51166530d318bb4205d3b87a860bdee1a6c766190c05f9bfff35"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.550378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"e77898541118cfa971f128dff0eb382e3a341312cf058739a5aae30d4d0aa454"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.550449 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.587459 4739 scope.go:117] "RemoveContainer" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.596250 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.611745 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs" (OuterVolumeSpecName: "logs") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621396 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621463 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621827 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.630844 4739 scope.go:117] "RemoveContainer" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.637614 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": container with ID starting with 8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725 not found: ID does not exist" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.637651 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} err="failed to get container status \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": rpc error: code = NotFound desc = could not find container \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": container with ID starting with 8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725 not found: ID does not exist" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.637676 4739 scope.go:117] "RemoveContainer" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.641209 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": container with ID starting with 1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220 not found: ID does not exist" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.641250 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} err="failed to get container status 
\"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": rpc error: code = NotFound desc = could not find container \"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": container with ID starting with 1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220 not found: ID does not exist" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.641275 4739 scope.go:117] "RemoveContainer" containerID="2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.643230 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz" (OuterVolumeSpecName: "kube-api-access-j9hzz") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "kube-api-access-j9hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.651196 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652753 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652770 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652784 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652791 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652803 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652849 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652874 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652882 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652893 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652900 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652920 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652927 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 
15:50:52.653144 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653167 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653179 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653192 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653207 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653219 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.657633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data" (OuterVolumeSpecName: "config-data") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.659727 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.668573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.668807 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.669012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.682467 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.686133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "combined-ca-bundle". 
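Annotation: the cpu_manager/state_mem/memory_manager RemoveStaleState lines above run as the replacement ceilometer-0 is admitted while resource-manager state is still keyed by the old pod UIDs (0ee4add2-... and b622bd61-...); the stale per-container CPUSet and memory assignments are dropped before new containers start. A toy version of that pruning, with an assumed map layout (not the kubelet's actual state types):

    // Toy sketch: drop per-container resource state for pod UIDs that are no
    // longer active, echoing the RemoveStaleState log lines above.
    package main

    import "fmt"

    func removeStaleState(state map[string]map[string]string, active map[string]bool) {
        for podUID, containers := range state {
            if active[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: podUID=%s container=%s\n", podUID, name)
            }
            delete(state, podUID) // safe while ranging in Go
        }
    }

    func main() {
        state := map[string]map[string]string{
            "0ee4add2": {"sg-core": "cpuset=0-1", "proxy-httpd": "cpuset=2"},
        }
        removeStaleState(state, map[string]bool{"5bba42f1": true})
    }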
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724016 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724058 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724071 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.769433 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.774311 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-m646v log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="5bba42f1-04c1-42b8-a64b-3d5c35083322" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.776335 4739 scope.go:117] "RemoveContainer" containerID="f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.801945 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" path="/var/lib/kubelet/pods/0ee4add2-be9f-4b5d-8199-74b9b0376900/volumes" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.819005 4739 scope.go:117] "RemoveContainer" containerID="f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: 
I0121 15:50:52.825938 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.826005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.826024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.846770 4739 scope.go:117] "RemoveContainer" containerID="b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.873263 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.880851 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.910955 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.912972 4739 util.go:30] "No sandbox for pod can be found. 
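Annotation: the E0121 pod_workers "Error syncing pod, skipping ... context canceled" above concerns the replacement ceilometer-0 (UID 5bba42f1-...): it was ADDed at 15:50:52.651 and DELETEd again at 15:50:52.769, so the sync worker's context was canceled mid-mount and the "unmounted volumes" list is simply all eight volumes the pod declares. A sketch of a cancelable wait of that shape (the timings and helper are illustrative assumptions):

    // Sketch: a sync step aborting on context cancellation, like the
    // "context canceled" pod_workers error above, where a DELETE arrived
    // ~120ms after the pod was added.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func waitForMounts(ctx context.Context, pending []string) error {
        for range pending {
            select {
            case <-ctx.Done():
                return fmt.Errorf("unmounted volumes=%v: %w", pending, ctx.Err())
            case <-time.After(50 * time.Millisecond): // pretend one volume mounts
            }
        }
        return nil
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() { time.Sleep(120 * time.Millisecond); cancel() }() // DELETE arrives
        err := waitForMounts(ctx, []string{"config-data", "scripts", "sg-core-conf-yaml"})
        fmt.Println("Error syncing pod, skipping:", err)
    }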
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918165 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.922048 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.927897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.927965 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 
crc kubenswrapper[4739]: I0121 15:50:52.933235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.933269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.938476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.940953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.941386 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.947358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.957499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.961874 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030380 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030462 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030649 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 
21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.133872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.137836 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.138700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.139419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.140149 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.160655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.261893 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.562928 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.584223 4739 util.go:30] "No sandbox for pod can be found. 
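Annotation: the nova-api-0 and ceilometer-0 seen from here on are new pod objects, not restarts. The names recur but the UIDs changed (b622bd61-... to 3097c3ca-... for nova-api-0, 0ee4add2-... to 5bba42f1-... for ceilometer-0), which is why the kubelet logs "No sandbox for pod can be found. Need to start a new one" rather than reusing the old sandboxes. A pod's identity for this purpose is effectively the (namespace, name, UID) triple:

    // Minimal illustration: same namespace/name but a new UID is a different
    // pod to the kubelet, hence the fresh sandbox in the entries above.
    package main

    import "fmt"

    type podKey struct{ namespace, name, uid string }

    func main() {
        old := podKey{"openstack", "nova-api-0", "b622bd61-6047-41a6-b6ef-d687e8973df6"}
        cur := podKey{"openstack", "nova-api-0", "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"}
        fmt.Println("same pod object:", old == cur) // false -> new sandbox needed
    }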
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645703 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645736 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645759 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645926 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.646016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.651500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.654086 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.657044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data" (OuterVolumeSpecName: "config-data") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.657089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v" (OuterVolumeSpecName: "kube-api-access-m646v") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "kube-api-access-m646v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.658328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts" (OuterVolumeSpecName: "scripts") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.660020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.660461 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.673913 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747709 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747747 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747757 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747766 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747777 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747786 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747796 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747804 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.810182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573556 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"5779b7f4b1e543277f2439a4720442ab9d977950980917266aad1689a07f13f5"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573570 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.596894 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.596876284 podStartE2EDuration="2.596876284s" podCreationTimestamp="2026-01-21 15:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:54.593009898 +0000 UTC m=+1486.283716162" watchObservedRunningTime="2026-01-21 15:50:54.596876284 +0000 UTC m=+1486.287582548" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.640313 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.649137 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.678960 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.726754 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.726939 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.729714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.730248 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.734072 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.771983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772440 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.796658 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bba42f1-04c1-42b8-a64b-3d5c35083322" path="/var/lib/kubelet/pods/5bba42f1-04c1-42b8-a64b-3d5c35083322/volumes" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.797451 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" path="/var/lib/kubelet/pods/b622bd61-6047-41a6-b6ef-d687e8973df6/volumes" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874398 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " 
pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875011 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874723 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.881122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.881851 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.887839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.888398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.892978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc 
kubenswrapper[4739]: I0121 15:50:54.895505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.055217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.141203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.202728 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.606784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.664538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:55 crc kubenswrapper[4739]: W0121 15:50:55.675591 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78e7dcb_3bf5_471b_a1ff_b70abd7f1925.slice/crio-36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc WatchSource:0}: Error finding container 36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc: Status 404 returned error can't find the container with id 36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.817310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.819058 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.821253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.821449 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.837057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.960059 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.998906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.999208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.999905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 
15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.000079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.009448 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.012128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.012781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.043530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.048765 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.049099 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" containerID="cri-o://8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" gracePeriod=10 Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.150348 4739 util.go:30] "No sandbox for pod can be found. 
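
[annotation] "Killing container with a grace period" (gracePeriod=10 for dnsmasq-dns above) is the usual TERM-then-KILL sequence. A rough stand-in using a local process, assuming a POSIX host; the real kubelet drives this through the CRI, not subprocess:

    import signal
    import subprocess

    def kill_with_grace(proc, grace_seconds):
        """TERM first; escalate to KILL when the grace period lapses."""
        proc.send_signal(signal.SIGTERM)
        try:
            return proc.wait(timeout=grace_seconds)
        except subprocess.TimeoutExpired:
            proc.kill()  # SIGKILL, the post-grace escalation
            return proc.wait()

    p = subprocess.Popen(["sleep", "60"])
    print("exit:", kill_with_grace(p, grace_seconds=10))  # -15: died on SIGTERM

dnsmasq handles SIGTERM promptly, which is why the PLEG event just below reports exitCode=0 rather than a kill after the grace period.
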
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.592549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc"} Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.596520 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac8c2262-2594-4058-a243-3d253315507d" containerID="8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" exitCode=0 Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.597834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd"} Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.624911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:56 crc kubenswrapper[4739]: W0121 15:50:56.631634 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode757d911_c2e0_4498_8b03_1b83fedc6e0e.slice/crio-e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de WatchSource:0}: Error finding container e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de: Status 404 returned error can't find the container with id e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.699870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: connect: connection refused" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.272449 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.353176 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9" (OuterVolumeSpecName: "kube-api-access-l7tj9") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "kube-api-access-l7tj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.418922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.434014 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.442270 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446952 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446985 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446993 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.451075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config" (OuterVolumeSpecName: "config") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.548187 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.627552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.630801 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.631618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.631674 4739 scope.go:117] "RemoveContainer" containerID="8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.635067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerStarted","Data":"34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.635101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerStarted","Data":"e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.667062 4739 scope.go:117] "RemoveContainer" containerID="324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.688322 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lksxc" podStartSLOduration=2.6883029389999997 podStartE2EDuration="2.688302939s" podCreationTimestamp="2026-01-21 15:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:57.659568931 +0000 UTC m=+1489.350275215" watchObservedRunningTime="2026-01-21 15:50:57.688302939 +0000 UTC m=+1489.379009203" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.693266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.702544 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:58 crc kubenswrapper[4739]: I0121 15:50:58.647494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9"} Jan 21 15:50:58 crc kubenswrapper[4739]: I0121 15:50:58.796434 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8c2262-2594-4058-a243-3d253315507d" path="/var/lib/kubelet/pods/ac8c2262-2594-4058-a243-3d253315507d/volumes" Jan 21 15:50:59 crc kubenswrapper[4739]: I0121 15:50:59.659808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355"} Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.683340 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b"} Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.683909 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.713058 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.600885392 podStartE2EDuration="7.713038691s" podCreationTimestamp="2026-01-21 15:50:54 +0000 UTC" firstStartedPulling="2026-01-21 15:50:55.688398304 +0000 UTC m=+1487.379104568" lastFinishedPulling="2026-01-21 15:51:00.800551593 +0000 UTC m=+1492.491257867" observedRunningTime="2026-01-21 15:51:01.704998176 +0000 UTC m=+1493.395704450" watchObservedRunningTime="2026-01-21 15:51:01.713038691 +0000 UTC m=+1493.403744955" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.263272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.263761 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.705433 4739 generic.go:334] "Generic (PLEG): container finished" podID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerID="34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a" exitCode=0 Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.705528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerDied","Data":"34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a"} Jan 21 15:51:04 crc kubenswrapper[4739]: I0121 15:51:04.278136 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:04 crc kubenswrapper[4739]: I0121 15:51:04.278651 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.142185 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.311857 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.319178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts" (OuterVolumeSpecName: "scripts") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.320536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s" (OuterVolumeSpecName: "kube-api-access-q6w8s") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "kube-api-access-q6w8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.343721 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.358141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data" (OuterVolumeSpecName: "config-data") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417423 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417658 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417731 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.726907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerDied","Data":"e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de"} Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.726956 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.727028 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.080134 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.080367 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" containerID="cri-o://c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090398 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090630 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" containerID="cri-o://58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090781 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" containerID="cri-o://156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099025 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099250 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" 
containerName="nova-metadata-log" containerID="cri-o://e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099395 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" containerID="cri-o://418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.736470 4739 generic.go:334] "Generic (PLEG): container finished" podID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerID="58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" exitCode=143 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.736544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef"} Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.738326 4739 generic.go:334] "Generic (PLEG): container finished" podID="5597c9e8-b443-4188-be2b-e01fb486489b" containerID="e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" exitCode=143 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.738364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819"} Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.511954 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.513623 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.514930 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.514985 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.445430 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.579357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn" (OuterVolumeSpecName: "kube-api-access-8cmjn") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "kube-api-access-8cmjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.603137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.605166 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data" (OuterVolumeSpecName: "config-data") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676291 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676319 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676382 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773016 4739 generic.go:334] "Generic (PLEG): container finished" podID="75061282-4db0-4380-9b45-0ed8428033ae" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" exitCode=0 Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerDied","Data":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773122 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerDied","Data":"beda81d6da457712fe5c401d53b87cfc884dc8cafe3280da9942bc39ff45cd46"} Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773656 4739 scope.go:117] "RemoveContainer" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.803752 4739 scope.go:117] "RemoveContainer" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.804372 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": container with ID starting with c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042 not found: ID does not exist" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.804406 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} err="failed to get container status \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": rpc error: code = NotFound desc = could not find container \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": container with ID starting with c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042 not found: ID does not exist" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.844596 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.870352 4739 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.884945 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885328 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885344 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885377 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="init" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885384 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="init" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885393 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885398 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885602 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885624 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885632 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.886210 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.888589 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.909439 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980437 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29h4\" (UniqueName: \"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.083225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.083395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29h4\" (UniqueName: \"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.084075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.098199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.098223 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.105572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f29h4\" (UniqueName: 
\"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.209282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.254596 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:40718->10.217.0.178:8775: read: connection reset by peer" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.255104 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:40720->10.217.0.178:8775: read: connection reset by peer" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.733237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796701 4739 generic.go:334] "Generic (PLEG): container finished" podID="5597c9e8-b443-4188-be2b-e01fb486489b" containerID="418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" exitCode=0 Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796895 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.800195 4739 generic.go:334] "Generic (PLEG): container finished" podID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerID="156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" exitCode=0 Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.800271 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.804048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2569778-376b-41fc-bdca-3bb914efd1b1","Type":"ContainerStarted","Data":"6e672bebcc9a594c65fe9905cd1b8e7e28fed3e1671191be87e38acbe556a468"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.826683 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911309 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911461 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911628 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911740 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.915273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs" (OuterVolumeSpecName: "logs") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.950602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2" (OuterVolumeSpecName: "kube-api-access-p9zd2") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "kube-api-access-p9zd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.966010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.978094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data" (OuterVolumeSpecName: "config-data") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.013981 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014013 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014023 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014031 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.081127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.115864 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.163623 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217942 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.218010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.218054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.219504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs" (OuterVolumeSpecName: "logs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.221664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9" (OuterVolumeSpecName: "kube-api-access-vksw9") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "kube-api-access-vksw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.250243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data" (OuterVolumeSpecName: "config-data") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.255949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.264967 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.286889 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320707 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320739 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320748 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320756 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320764 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320848 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.794161 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75061282-4db0-4380-9b45-0ed8428033ae" path="/var/lib/kubelet/pods/75061282-4db0-4380-9b45-0ed8428033ae/volumes" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.817903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2569778-376b-41fc-bdca-3bb914efd1b1","Type":"ContainerStarted","Data":"71e822eb0b01c9b48b194bc99e56a9da18006848438c01cd10f109aceea8c6a4"} Jan 21 
15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.820575 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.820751 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.822260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"5779b7f4b1e543277f2439a4720442ab9d977950980917266aad1689a07f13f5"} Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.822356 4739 scope.go:117] "RemoveContainer" containerID="156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.837891 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.83787227 podStartE2EDuration="2.83787227s" podCreationTimestamp="2026-01-21 15:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:10.835142984 +0000 UTC m=+1502.525849268" watchObservedRunningTime="2026-01-21 15:51:10.83787227 +0000 UTC m=+1502.528578534" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.847182 4739 scope.go:117] "RemoveContainer" containerID="58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.864873 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.887770 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.898958 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.927902 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.945317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946104 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946129 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946137 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946184 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946193 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946573 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946612 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946628 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946648 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.948377 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.955464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.955904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.960487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.983197 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.994660 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.996396 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.998896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.999175 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.008410 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049250 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151602 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151731 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152076 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152380 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.156663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: 
\"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.158054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.158760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.161678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.168326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.173180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255316 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255949 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" 
(UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.256225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.259264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.259316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.264385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.271146 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.274364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.318335 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.738303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.843900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"0777abae0e30961907d200119da5f2dcab9d22ea6777432f57927856941d733a"} Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.858350 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: W0121 15:51:11.875617 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b7cc4f_a58e_429b_b4ed_0f3ea3ebfa06.slice/crio-304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954 WatchSource:0}: Error finding container 304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954: Status 404 returned error can't find the container with id 304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954 Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.797117 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" path="/var/lib/kubelet/pods/3097c3ca-1f70-4262-b5ad-b0d2521e44dd/volumes" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.798469 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" path="/var/lib/kubelet/pods/5597c9e8-b443-4188-be2b-e01fb486489b/volumes" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"9c24043c624c6ca64dde9e85954b2152ffa2836de73220273564c9790ed47605"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857511 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"e9ff1b687145dc278df3389f2be3103efb5afcf905319f2457c2bb5b8e4aa605"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857529 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.860883 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"eaff17c574ea8c2d40f69a18f63bdc6d77389a2c27c5122f75721061076f4662"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.860954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"d501cf8e68026298133c8b4207fcf702ed6bd0c09a7227aa40755cba88ee25ab"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.888354 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.888335048 podStartE2EDuration="2.888335048s" podCreationTimestamp="2026-01-21 15:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:12.878470903 +0000 UTC m=+1504.569177167" watchObservedRunningTime="2026-01-21 15:51:12.888335048 +0000 UTC m=+1504.579041302" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.943269 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.94177248 podStartE2EDuration="2.94177248s" podCreationTimestamp="2026-01-21 15:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:12.911386331 +0000 UTC m=+1504.602092595" watchObservedRunningTime="2026-01-21 15:51:12.94177248 +0000 UTC m=+1504.632478744" Jan 21 15:51:14 crc kubenswrapper[4739]: I0121 15:51:14.209637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:51:16 crc kubenswrapper[4739]: I0121 15:51:16.319497 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:51:16 crc kubenswrapper[4739]: I0121 15:51:16.319563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.210005 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.243447 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.952247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.272666 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.273149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.319558 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.319634 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.289083 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09a86707-0931-4a2a-961c-6109688ed7e0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.289370 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09a86707-0931-4a2a-961c-6109688ed7e0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.332970 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 
21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.333224 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:25 crc kubenswrapper[4739]: I0121 15:51:25.072422 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.285372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.286129 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.286771 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.287286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.292614 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.295384 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.330487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.336272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.338874 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:51:32 crc kubenswrapper[4739]: I0121 15:51:32.035158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:51:35 crc kubenswrapper[4739]: I0121 15:51:35.223133 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:51:35 crc kubenswrapper[4739]: I0121 15:51:35.223556 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:51:40 crc kubenswrapper[4739]: I0121 15:51:40.852160 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:41 crc kubenswrapper[4739]: I0121 15:51:41.872696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:46 crc kubenswrapper[4739]: I0121 15:51:46.640493 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" 
containerID="cri-o://aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" gracePeriod=604795 Jan 21 15:51:46 crc kubenswrapper[4739]: I0121 15:51:46.857126 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" containerID="cri-o://0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" gracePeriod=604796 Jan 21 15:51:47 crc kubenswrapper[4739]: I0121 15:51:47.153870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 21 15:51:47 crc kubenswrapper[4739]: I0121 15:51:47.211983 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 21 15:51:53 crc kubenswrapper[4739]: I0121 15:51:53.227317 4739 generic.go:334] "Generic (PLEG): container finished" podID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerID="aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" exitCode=0 Jan 21 15:51:53 crc kubenswrapper[4739]: I0121 15:51:53.228477 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775"} Jan 21 15:51:53 crc kubenswrapper[4739]: E0121 15:51:53.756263 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6800cb6_6e4e_4300_9148_be2e0d2deb6d.slice/crio-conmon-0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.168178 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"4be9ccaff7f44b9922cb3a123f667b6b06795c76e8f74a176cda84687b755499"} Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287849 4739 scope.go:117] "RemoveContainer" containerID="aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287984 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.299385 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerID="0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" exitCode=0 Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.299512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714"} Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.324380 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.324671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325764 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325984 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.327145 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.327239 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.337359 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.350155 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.358188 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl" (OuterVolumeSpecName: "kube-api-access-8pwwl") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "kube-api-access-8pwwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.358699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.365057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info" (OuterVolumeSpecName: "pod-info") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.370433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.395254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data" (OuterVolumeSpecName: "config-data") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.425588 4739 scope.go:117] "RemoveContainer" containerID="beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428947 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428980 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428992 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429004 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429029 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429042 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429053 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.438881 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.461638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf" (OuterVolumeSpecName: "server-conf") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.463101 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.531570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.531978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532568 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533121 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533959 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.534068 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.540467 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.541539 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.542721 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99" (OuterVolumeSpecName: "kube-api-access-dzd99") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "kube-api-access-dzd99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.544088 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.546424 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.557380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.562099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.569455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.569729 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info" (OuterVolumeSpecName: "pod-info") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.604075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data" (OuterVolumeSpecName: "config-data") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636362 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636592 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636692 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636806 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636936 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637015 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637105 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637182 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637255 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637333 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.647939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.673265 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf" (OuterVolumeSpecName: "server-conf") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.725927 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.760353 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.764187 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.764611 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767548 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767933 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767956 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767970 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767977 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767986 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767992 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.768006 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768200 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768213 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.769386 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780217 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780433 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780627 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780725 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.807876 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" path="/var/lib/kubelet/pods/807cb521-8cc2-4f29-9ff4-7138d251a817/volumes" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862674 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862725 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.863157 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.863280 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.964946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") 
" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965300 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.966067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.966635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.967727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.967776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.968423 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.968941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.969362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.970042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.971429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.972414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.973191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.996764 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.000446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.112639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.327250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"9b30f94b9f3236e39738165e3f009216fa8c05c9ae2f0cee84393829c2ab8b70"} Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.327682 4739 scope.go:117] "RemoveContainer" containerID="0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.328003 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.371103 4739 scope.go:117] "RemoveContainer" containerID="f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.380374 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.424976 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.438783 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.440517 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.450648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.451106 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.451411 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.455451 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.455868 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.456293 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.456365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.470716 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582772 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583158 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583342 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583472 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583991 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686258 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686281 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686451 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.687940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689188 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689491 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.693357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.697949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.698098 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.698332 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.713726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.714506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.730560 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: W0121 15:51:55.779852 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2e9da51_9cc3_45a5_ac25_c939b3ac2b1a.slice/crio-4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d WatchSource:0}: Error finding container 4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d: Status 404 returned error can't find the container with id 4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.783993 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.786296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.296104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.345803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"626ad6d729fb7a5483aef1a58b1ee8138b003d390fb8960d710238a791a388c5"} Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.350374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d"} Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.795787 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" path="/var/lib/kubelet/pods/a6800cb6-6e4e-4300-9148-be2e0d2deb6d/volumes" Jan 21 15:51:57 crc kubenswrapper[4739]: I0121 15:51:57.365115 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200"} Jan 21 15:51:58 crc kubenswrapper[4739]: I0121 15:51:58.374940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741"} Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.886109 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.888499 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.890944 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.904960 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928684 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: 
\"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030452 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.032215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 
15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.037119 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:03 crc kubenswrapper[4739]: E0121 15:52:03.037870 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nxv54], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" podUID="f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.056947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.072823 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.074340 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.094256 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132435 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.234808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.234941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.236106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.236607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.237125 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.237930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.238471 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.256939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.414942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.425094 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.434298 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.437773 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.437980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438081 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438210 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config" (OuterVolumeSpecName: "config") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439310 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439320 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439332 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439340 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439348 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.441018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54" (OuterVolumeSpecName: "kube-api-access-nxv54") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "kube-api-access-nxv54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.540851 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.905515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.330541 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.332223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335527 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335969 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.336170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.355263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360229 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.446117 4739 generic.go:334] "Generic (PLEG): container finished" podID="065383f0-2fd3-46d3-b780-a1999eed338a" containerID="6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13" exitCode=0 Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.446236 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.450955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13"} Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.451080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerStarted","Data":"cde79d96dae17bcae68c41ffb55858e6bad85e2582e14dd416ed04377ea4fae9"} Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462601 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.472024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.487443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.488222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.512649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.580226 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.590407 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.658740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.795343 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" path="/var/lib/kubelet/pods/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a/volumes" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.226961 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.228495 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.408650 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.459084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerStarted","Data":"73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2"} Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.461925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerStarted","Data":"f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b"} Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.462329 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.491938 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" podStartSLOduration=2.4919193379999998 podStartE2EDuration="2.491919338s" podCreationTimestamp="2026-01-21 
15:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:05.484033848 +0000 UTC m=+1557.174740112" watchObservedRunningTime="2026-01-21 15:52:05.491919338 +0000 UTC m=+1557.182625602" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.436013 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.503619 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.503936 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns" containerID="cri-o://711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" gracePeriod=10 Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.777938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.779712 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.790611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881266 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881578 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881796 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.883586 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 
crc kubenswrapper[4739]: I0121 15:52:13.883742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.986876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.988798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.988842 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.989630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.990485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.022758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.110524 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.576765 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerID="711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" exitCode=0 Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.576834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9"} Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.704129 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.806793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:15 crc kubenswrapper[4739]: W0121 15:52:15.808023 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7eae90b_f949_4872_a985_1066d94b337a.slice/crio-f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6 WatchSource:0}: Error finding container f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6: Status 404 returned error can't find the container with id f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6 Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823386 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823523 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.046490 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc" (OuterVolumeSpecName: "kube-api-access-sqrsc") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "kube-api-access-sqrsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.089562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.090891 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config" (OuterVolumeSpecName: "config") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.092444 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.099691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132001 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132055 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132070 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132093 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132108 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.597566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6"} Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1"} Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600120 4739 scope.go:117] "RemoveContainer" containerID="711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600165 4739 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.625223 4739 scope.go:117] "RemoveContainer" containerID="35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.646416 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.656256 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.795178 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" path="/var/lib/kubelet/pods/ac0420ff-cde9-4c4c-962a-ac17b202c464/volumes" Jan 21 15:52:20 crc kubenswrapper[4739]: I0121 15:52:20.639496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3"} Jan 21 15:52:22 crc kubenswrapper[4739]: I0121 15:52:22.659390 4739 generic.go:334] "Generic (PLEG): container finished" podID="c7eae90b-f949-4872-a985-1066d94b337a" containerID="1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3" exitCode=0 Jan 21 15:52:22 crc kubenswrapper[4739]: I0121 15:52:22.659794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3"} Jan 21 15:52:26 crc kubenswrapper[4739]: I0121 15:52:26.650566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:26 crc kubenswrapper[4739]: I0121 15:52:26.710187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c"} Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.722104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerStarted","Data":"0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594"} Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.722246 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.743724 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podStartSLOduration=14.743703723 podStartE2EDuration="14.743703723s" podCreationTimestamp="2026-01-21 15:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.738659833 +0000 UTC m=+1579.429366107" watchObservedRunningTime="2026-01-21 15:52:27.743703723 +0000 UTC m=+1579.434409987" Jan 21 15:52:28 crc kubenswrapper[4739]: I0121 15:52:28.753868 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" 
podStartSLOduration=3.522859741 podStartE2EDuration="24.753844726s" podCreationTimestamp="2026-01-21 15:52:04 +0000 UTC" firstStartedPulling="2026-01-21 15:52:05.416489672 +0000 UTC m=+1557.107195946" lastFinishedPulling="2026-01-21 15:52:26.647474667 +0000 UTC m=+1578.338180931" observedRunningTime="2026-01-21 15:52:28.745186494 +0000 UTC m=+1580.435892768" watchObservedRunningTime="2026-01-21 15:52:28.753844726 +0000 UTC m=+1580.444550990" Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.750007 4739 generic.go:334] "Generic (PLEG): container finished" podID="23fcbb0d-682e-40b5-9921-f484672af568" containerID="c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741" exitCode=0 Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.750061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerDied","Data":"c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741"} Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.754352 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerID="228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200" exitCode=0 Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.754411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerDied","Data":"228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.770875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"9dd68ca8faf43ba1faf607c3e9d5e2cb3da863a564a85c7936c83b546390721a"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.771547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.775406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"63f4e4712944b2734e6ba6d0cfc8c24669fe92e7ede51b8aa98742a814fb81cb"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.775854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.799460 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.799433686 podStartE2EDuration="37.799433686s" podCreationTimestamp="2026-01-21 15:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:31.798058768 +0000 UTC m=+1583.488765032" watchObservedRunningTime="2026-01-21 15:52:31.799433686 +0000 UTC m=+1583.490139950" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.833979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.833937579 podStartE2EDuration="36.833937579s" podCreationTimestamp="2026-01-21 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:31.822544491 +0000 UTC 
m=+1583.513250765" watchObservedRunningTime="2026-01-21 15:52:31.833937579 +0000 UTC m=+1583.524643843" Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.113015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.183336 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.183583 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns" containerID="cri-o://f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b" gracePeriod=10 Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.804188 4739 generic.go:334] "Generic (PLEG): container finished" podID="065383f0-2fd3-46d3-b780-a1999eed338a" containerID="f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b" exitCode=0 Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.804379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b"} Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.222744 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.223217 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.223267 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.224242 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.224303 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" gracePeriod=600 Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.259231 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411858 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411923 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815305 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" exitCode=0 Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"} Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815424 4739 scope.go:117] "RemoveContainer" containerID="f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.817803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"cde79d96dae17bcae68c41ffb55858e6bad85e2582e14dd416ed04377ea4fae9"} Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.817943 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.449092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf" (OuterVolumeSpecName: "kube-api-access-q4mtf") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "kube-api-access-q4mtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.490792 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.495572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.499801 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.511532 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config" (OuterVolumeSpecName: "config") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.511899 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534301 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534557 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534641 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534725 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534795 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534883 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.747810 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.756789 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.792749 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" path="/var/lib/kubelet/pods/065383f0-2fd3-46d3-b780-a1999eed338a/volumes" Jan 21 15:52:37 crc kubenswrapper[4739]: E0121 15:52:37.028750 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.056959 4739 scope.go:117] "RemoveContainer" containerID="f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b" Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.082333 4739 scope.go:117] "RemoveContainer" containerID="6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13" Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.836550 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:52:37 crc kubenswrapper[4739]: E0121 15:52:37.836873 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:52:44 crc kubenswrapper[4739]: I0121 15:52:44.894888 4739 generic.go:334] "Generic (PLEG): container finished" podID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerID="0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594" exitCode=0 Jan 21 15:52:44 crc kubenswrapper[4739]: I0121 15:52:44.894993 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerDied","Data":"0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594"} Jan 21 15:52:45 crc kubenswrapper[4739]: I0121 15:52:45.117393 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.190:5671: connect: connection refused" Jan 21 15:52:45 crc kubenswrapper[4739]: I0121 15:52:45.790991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.493564 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.530279 4739 scope.go:117] "RemoveContainer" containerID="e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.574463 4739 scope.go:117] "RemoveContainer" containerID="67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.629096 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.629327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.630359 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.630404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.636125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh" (OuterVolumeSpecName: "kube-api-access-l7qfh") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "kube-api-access-l7qfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.637240 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.697580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.698198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory" (OuterVolumeSpecName: "inventory") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735200 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735243 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735257 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735269 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerDied","Data":"73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2"} Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925951 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925964 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997026 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.997637 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997754 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.997860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997953 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998031 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="init" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="init" Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998185 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="init" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998236 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="init" Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998309 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998362 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998579 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998653 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998748 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.999543 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002980 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.003265 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.020376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: 
\"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.250608 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.250633 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.251580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.264946 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.315695 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.872390 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.936590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerStarted","Data":"d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb"} Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.516215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.518628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.528120 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.587905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.588054 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.588104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689783 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689918 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " 
pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.690546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.690916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.711685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.865223 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.962658 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerStarted","Data":"51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd"} Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.005663 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" podStartSLOduration=2.527764258 podStartE2EDuration="4.00564529s" podCreationTimestamp="2026-01-21 15:52:46 +0000 UTC" firstStartedPulling="2026-01-21 15:52:47.889390015 +0000 UTC m=+1599.580096279" lastFinishedPulling="2026-01-21 15:52:49.367271047 +0000 UTC m=+1601.057977311" observedRunningTime="2026-01-21 15:52:49.981740363 +0000 UTC m=+1601.672446627" watchObservedRunningTime="2026-01-21 15:52:50.00564529 +0000 UTC m=+1601.696351554" Jan 21 15:52:50 crc kubenswrapper[4739]: W0121 15:52:50.315944 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a6cab3_6566_4d9b_b326_f0d61563d2be.slice/crio-7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a WatchSource:0}: Error finding container 7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a: Status 404 returned error can't find the container with id 7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.319282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.981409 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" exitCode=0 Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.983069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" 
event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98"} Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.983102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a"} Jan 21 15:52:51 crc kubenswrapper[4739]: I0121 15:52:51.783581 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:52:51 crc kubenswrapper[4739]: E0121 15:52:51.784256 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:52:51 crc kubenswrapper[4739]: I0121 15:52:51.994209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.031600 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" exitCode=0 Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.031677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.037437 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.116017 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:52:57 crc kubenswrapper[4739]: I0121 15:52:57.058917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} Jan 21 15:52:57 crc kubenswrapper[4739]: I0121 15:52:57.078031 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w8ftq" podStartSLOduration=3.203533203 podStartE2EDuration="8.078009305s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:50.983846341 +0000 UTC m=+1602.674552605" lastFinishedPulling="2026-01-21 15:52:55.858322443 +0000 UTC m=+1607.549028707" observedRunningTime="2026-01-21 15:52:57.076534334 +0000 UTC m=+1608.767240598" watchObservedRunningTime="2026-01-21 15:52:57.078009305 +0000 UTC m=+1608.768715569" Jan 21 15:52:59 crc kubenswrapper[4739]: I0121 15:52:59.872703 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:59 crc 
kubenswrapper[4739]: I0121 15:52:59.873286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:59 crc kubenswrapper[4739]: I0121 15:52:59.924872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:02 crc kubenswrapper[4739]: I0121 15:53:02.783547 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:02 crc kubenswrapper[4739]: E0121 15:53:02.784176 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:09 crc kubenswrapper[4739]: I0121 15:53:09.925448 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:09 crc kubenswrapper[4739]: I0121 15:53:09.997757 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.179606 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w8ftq" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" containerID="cri-o://0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" gracePeriod=2 Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.721426 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877337 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877655 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.879020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities" (OuterVolumeSpecName: "utilities") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.883045 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w" (OuterVolumeSpecName: "kube-api-access-bpq6w") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "kube-api-access-bpq6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.934449 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980054 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980101 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980116 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189134 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" exitCode=0 Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189208 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a"} Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189227 4739 scope.go:117] "RemoveContainer" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189354 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.217650 4739 scope.go:117] "RemoveContainer" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.226425 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.235514 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.251325 4739 scope.go:117] "RemoveContainer" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.286478 4739 scope.go:117] "RemoveContainer" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.287237 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": container with ID starting with 0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1 not found: ID does not exist" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.287412 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} err="failed to get container status \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": rpc error: code = NotFound desc = could not find container \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": container with ID starting with 0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1 not found: ID does not exist" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.287558 4739 scope.go:117] "RemoveContainer" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.288345 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": container with ID starting with bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c not found: ID does not exist" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288488 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} err="failed to get container status \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": rpc error: code = NotFound desc = could not find container \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": container with ID starting with bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c not found: ID does not exist" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288596 4739 scope.go:117] "RemoveContainer" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.288940 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": container with ID starting with 63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98 not found: ID does not exist" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288965 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98"} err="failed to get container status \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": rpc error: code = NotFound desc = could not find container \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": container with ID starting with 63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98 not found: ID does not exist" Jan 21 15:53:12 crc kubenswrapper[4739]: I0121 15:53:12.794607 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" path="/var/lib/kubelet/pods/d2a6cab3-6566-4d9b-b326-f0d61563d2be/volumes" Jan 21 15:53:16 crc kubenswrapper[4739]: I0121 15:53:16.783692 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:16 crc kubenswrapper[4739]: E0121 15:53:16.784427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:27 crc kubenswrapper[4739]: I0121 15:53:27.782688 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:27 crc kubenswrapper[4739]: E0121 15:53:27.783458 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.945622 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 15:53:38.946609 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-content" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946625 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-content" Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 15:53:38.946638 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946648 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 
15:53:38.946661 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-utilities" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946668 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-utilities" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.949681 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.966313 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.976147 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.114504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.114890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.115091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217598 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.219080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.219122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.243327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.289066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.844105 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446361 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645" exitCode=0 Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645"} Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446639 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"3c382df4acd2d7df658921969ee6b8973ac979b90e3a953d69b8f118eac72307"} Jan 21 15:53:41 crc kubenswrapper[4739]: I0121 15:53:41.455949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400"} Jan 21 15:53:41 crc kubenswrapper[4739]: I0121 15:53:41.782753 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:41 crc kubenswrapper[4739]: E0121 15:53:41.783034 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:43 crc kubenswrapper[4739]: I0121 15:53:43.503179 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400" exitCode=0 Jan 21 15:53:43 crc 
kubenswrapper[4739]: I0121 15:53:43.503265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400"} Jan 21 15:53:44 crc kubenswrapper[4739]: I0121 15:53:44.514417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb"} Jan 21 15:53:44 crc kubenswrapper[4739]: I0121 15:53:44.550511 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gf87f" podStartSLOduration=3.111452439 podStartE2EDuration="6.550489593s" podCreationTimestamp="2026-01-21 15:53:38 +0000 UTC" firstStartedPulling="2026-01-21 15:53:40.44894972 +0000 UTC m=+1652.139655984" lastFinishedPulling="2026-01-21 15:53:43.887986874 +0000 UTC m=+1655.578693138" observedRunningTime="2026-01-21 15:53:44.539584075 +0000 UTC m=+1656.230290359" watchObservedRunningTime="2026-01-21 15:53:44.550489593 +0000 UTC m=+1656.241195857" Jan 21 15:53:46 crc kubenswrapper[4739]: I0121 15:53:46.766056 4739 scope.go:117] "RemoveContainer" containerID="90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514" Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.289934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.290261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.383411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.619871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.671861 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:51 crc kubenswrapper[4739]: I0121 15:53:51.585409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gf87f" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" containerID="cri-o://8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" gracePeriod=2 Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.615473 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" exitCode=0 Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.615763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb"} Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.821593 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890621 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890729 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.891943 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities" (OuterVolumeSpecName: "utilities") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.893144 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.920711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6" (OuterVolumeSpecName: "kube-api-access-cqkc6") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "kube-api-access-cqkc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.954067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.994885 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.994938 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.628237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"3c382df4acd2d7df658921969ee6b8973ac979b90e3a953d69b8f118eac72307"} Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.628289 4739 scope.go:117] "RemoveContainer" containerID="8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.629350 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.655731 4739 scope.go:117] "RemoveContainer" containerID="85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.666755 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.676134 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.682504 4739 scope.go:117] "RemoveContainer" containerID="d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.783588 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:53 crc kubenswrapper[4739]: E0121 15:53:53.784323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:54 crc kubenswrapper[4739]: I0121 15:53:54.794878 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" path="/var/lib/kubelet/pods/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2/volumes" Jan 21 15:54:08 crc kubenswrapper[4739]: I0121 15:54:08.784160 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:08 crc kubenswrapper[4739]: E0121 15:54:08.785035 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:19 crc kubenswrapper[4739]: I0121 15:54:19.783531 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:19 crc kubenswrapper[4739]: E0121 15:54:19.784359 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:31 crc kubenswrapper[4739]: I0121 15:54:31.783332 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:31 crc kubenswrapper[4739]: E0121 15:54:31.783993 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:42 crc kubenswrapper[4739]: I0121 15:54:42.784580 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:42 crc kubenswrapper[4739]: E0121 15:54:42.785368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:57 crc kubenswrapper[4739]: I0121 15:54:57.783369 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:57 crc kubenswrapper[4739]: E0121 15:54:57.784461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:11 crc kubenswrapper[4739]: I0121 15:55:11.783457 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:11 crc kubenswrapper[4739]: E0121 15:55:11.785214 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:22 crc kubenswrapper[4739]: I0121 15:55:22.783642 4739 
scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:22 crc kubenswrapper[4739]: E0121 15:55:22.784680 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:37 crc kubenswrapper[4739]: I0121 15:55:37.783893 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:37 crc kubenswrapper[4739]: E0121 15:55:37.784661 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:46 crc kubenswrapper[4739]: I0121 15:55:46.927841 4739 scope.go:117] "RemoveContainer" containerID="bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" Jan 21 15:55:46 crc kubenswrapper[4739]: I0121 15:55:46.959396 4739 scope.go:117] "RemoveContainer" containerID="e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" Jan 21 15:55:47 crc kubenswrapper[4739]: I0121 15:55:46.999764 4739 scope.go:117] "RemoveContainer" containerID="e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" Jan 21 15:55:47 crc kubenswrapper[4739]: I0121 15:55:47.024684 4739 scope.go:117] "RemoveContainer" containerID="7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" Jan 21 15:55:48 crc kubenswrapper[4739]: I0121 15:55:48.793046 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:48 crc kubenswrapper[4739]: E0121 15:55:48.793582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:02 crc kubenswrapper[4739]: I0121 15:56:02.790020 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:02 crc kubenswrapper[4739]: E0121 15:56:02.790916 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.054074 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:56:05 crc kubenswrapper[4739]: 
I0121 15:56:05.065140 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.077057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.088699 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.096279 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.108289 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.039512 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.051592 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.059052 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.066276 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.073278 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.080690 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.794432 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" path="/var/lib/kubelet/pods/236f8c92-05a6-4512-a96e-61babb7c44e6/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.795350 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" path="/var/lib/kubelet/pods/2fb43d43-ff94-49b3-9b9c-6db46b040c95/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.796017 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" path="/var/lib/kubelet/pods/612cd690-e4aa-49df-862b-3484cc15bac0/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.796665 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93643236-1032-4392-8463-f9e48dc2ae84" path="/var/lib/kubelet/pods/93643236-1032-4392-8463-f9e48dc2ae84/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.797978 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" path="/var/lib/kubelet/pods/9a2b900b-3c0d-4958-ba5b-627101c68acb/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.798631 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc4447d-5821-489f-942f-ce925194a473" path="/var/lib/kubelet/pods/9dc4447d-5821-489f-942f-ce925194a473/volumes" Jan 21 15:56:15 crc kubenswrapper[4739]: I0121 15:56:15.783051 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:15 crc kubenswrapper[4739]: E0121 15:56:15.784114 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:29 crc kubenswrapper[4739]: I0121 15:56:29.782407 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:29 crc kubenswrapper[4739]: E0121 15:56:29.783147 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.045105 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.052923 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.061317 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.071511 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.079654 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.086861 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.802769 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" path="/var/lib/kubelet/pods/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0/volumes" Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.804262 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" path="/var/lib/kubelet/pods/b8a0eafc-020a-44b3-a392-6b8eea12109e/volumes" Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.804953 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" path="/var/lib/kubelet/pods/c8da5917-a0c7-4e03-b13a-5d3af63e49bd/volumes" Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.032939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.041000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.052037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.059430 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:56:37 crc 
kubenswrapper[4739]: I0121 15:56:37.067016 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.074743 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.081574 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.115911 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.797770 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" path="/var/lib/kubelet/pods/5f5e4610-5432-4990-9e2b-a2d084e8316f/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.799257 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" path="/var/lib/kubelet/pods/6589cf07-234c-4ade-ad9b-8525147c0c5e/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.800172 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" path="/var/lib/kubelet/pods/a19632c0-51a3-472e-a64c-33e82057e0aa/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.801148 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" path="/var/lib/kubelet/pods/c3b6e9ee-dc03-4f47-a467-68d20988d0d5/volumes" Jan 21 15:56:44 crc kubenswrapper[4739]: I0121 15:56:44.782777 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:44 crc kubenswrapper[4739]: E0121 15:56:44.783569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.031955 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.038788 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.792109 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" path="/var/lib/kubelet/pods/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c/volumes" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.078490 4739 scope.go:117] "RemoveContainer" containerID="310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.105720 4739 scope.go:117] "RemoveContainer" containerID="ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.171369 4739 scope.go:117] "RemoveContainer" containerID="d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.221753 4739 scope.go:117] "RemoveContainer" 
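The "Cleaned up orphaned pod volumes dir" entries interleaved above record the kubelet's periodic housekeeping: once a pod has been removed from the API and its volumes unmounted, the leftover /var/lib/kubelet/pods/<UID>/volumes directory is deleted. A minimal sketch of that scan-and-prune idea (the function name and directory layout below are assumptions for illustration, not kubelet source):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes <root>/<podUID>/volumes for every pod
// directory whose UID is not in the active set; root stands in for
// /var/lib/kubelet/pods.
func cleanupOrphanedPodDirs(root string, active map[string]bool) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue // still an active pod; leave its directory alone
		}
		dir := filepath.Join(root, e.Name(), "volumes")
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
	}
	return nil
}

func main() {
	// Hypothetical layout for illustration only.
	root, _ := os.MkdirTemp("", "pods")
	defer os.RemoveAll(root)
	_ = os.MkdirAll(filepath.Join(root, "ff0384bf", "volumes"), 0o755) // deleted pod
	_ = os.MkdirAll(filepath.Join(root, "aaaaaaaa", "volumes"), 0o755) // still active
	_ = cleanupOrphanedPodDirs(root, map[string]bool{"aaaaaaaa": true})
}
```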
containerID="418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.243251 4739 scope.go:117] "RemoveContainer" containerID="e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.259468 4739 scope.go:117] "RemoveContainer" containerID="592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.283275 4739 scope.go:117] "RemoveContainer" containerID="1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.339220 4739 scope.go:117] "RemoveContainer" containerID="f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.381388 4739 scope.go:117] "RemoveContainer" containerID="92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.415633 4739 scope.go:117] "RemoveContainer" containerID="92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.439067 4739 scope.go:117] "RemoveContainer" containerID="ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.468089 4739 scope.go:117] "RemoveContainer" containerID="af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.510079 4739 scope.go:117] "RemoveContainer" containerID="a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.548337 4739 scope.go:117] "RemoveContainer" containerID="50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.582046 4739 scope.go:117] "RemoveContainer" containerID="5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.602013 4739 scope.go:117] "RemoveContainer" containerID="f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff" Jan 21 15:56:55 crc kubenswrapper[4739]: I0121 15:56:55.474235 4739 generic.go:334] "Generic (PLEG): container finished" podID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerID="51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd" exitCode=0 Jan 21 15:56:55 crc kubenswrapper[4739]: I0121 15:56:55.474489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerDied","Data":"51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd"} Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.602342 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756533 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756720 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.762682 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9" (OuterVolumeSpecName: "kube-api-access-rdrc9") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "kube-api-access-rdrc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.762896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.783799 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:57 crc kubenswrapper[4739]: E0121 15:56:57.784586 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.788169 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory" (OuterVolumeSpecName: "inventory") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.789228 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860612 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860663 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860679 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860693 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerDied","Data":"d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb"} Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507743 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507394 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.716623 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717011 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717023 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717033 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717040 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717059 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-content" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717066 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-content" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717083 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-utilities" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717089 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-utilities" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717273 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717899 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.722288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.722380 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.723027 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.723172 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.737875 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782861 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.884869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.885136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.885361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.893559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.893559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.906665 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:59 crc kubenswrapper[4739]: I0121 15:56:59.036657 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:59 crc kubenswrapper[4739]: I0121 15:56:59.791737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:59 crc kubenswrapper[4739]: W0121 15:56:59.803527 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod294dabba_e6ac_404b_a3d4_0819c7baac6d.slice/crio-632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538 WatchSource:0}: Error finding container 632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538: Status 404 returned error can't find the container with id 632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538 Jan 21 15:57:00 crc kubenswrapper[4739]: I0121 15:57:00.529206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerStarted","Data":"632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538"} Jan 21 15:57:01 crc kubenswrapper[4739]: I0121 15:57:01.539365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerStarted","Data":"6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71"} Jan 21 15:57:12 crc kubenswrapper[4739]: I0121 15:57:12.782344 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:12 crc kubenswrapper[4739]: E0121 15:57:12.783101 4739 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:57:27 crc kubenswrapper[4739]: I0121 15:57:27.783315 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:27 crc kubenswrapper[4739]: E0121 15:57:27.784085 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.047664 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" podStartSLOduration=33.791304195 podStartE2EDuration="35.047643059s" podCreationTimestamp="2026-01-21 15:56:58 +0000 UTC" firstStartedPulling="2026-01-21 15:56:59.807005202 +0000 UTC m=+1851.497711466" lastFinishedPulling="2026-01-21 15:57:01.063344066 +0000 UTC m=+1852.754050330" observedRunningTime="2026-01-21 15:57:01.562611359 +0000 UTC m=+1853.253317613" watchObservedRunningTime="2026-01-21 15:57:33.047643059 +0000 UTC m=+1884.738349353" Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.058644 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.068437 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.081742 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.081807 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:57:34 crc kubenswrapper[4739]: I0121 15:57:34.795416 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" path="/var/lib/kubelet/pods/3b853447-6a81-4b1e-b26c-cefc48c32a81/volumes" Jan 21 15:57:34 crc kubenswrapper[4739]: I0121 15:57:34.796893 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84721a4-d079-460e-8fc5-064ea758d676" path="/var/lib/kubelet/pods/d84721a4-d079-460e-8fc5-064ea758d676/volumes" Jan 21 15:57:41 crc kubenswrapper[4739]: I0121 15:57:41.784243 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:42 crc kubenswrapper[4739]: I0121 15:57:42.879397 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} Jan 21 15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.025393 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 
15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.032058 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.796763 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" path="/var/lib/kubelet/pods/a80f8b10-47b3-4590-95be-4468cea2f9c0/volumes" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.032232 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.044284 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.890524 4739 scope.go:117] "RemoveContainer" containerID="71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.929399 4739 scope.go:117] "RemoveContainer" containerID="a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.962588 4739 scope.go:117] "RemoveContainer" containerID="c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922" Jan 21 15:57:48 crc kubenswrapper[4739]: I0121 15:57:48.791779 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" path="/var/lib/kubelet/pods/1f3d6499-baea-49df-8dab-393a192e0a6b/volumes" Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.043859 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.054295 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.794340 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" path="/var/lib/kubelet/pods/34449cf3-049d-453b-ab88-ab40fdc25d6c/volumes" Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.049640 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.062830 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.075751 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.084596 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.553478 4739 generic.go:334] "Generic (PLEG): container finished" podID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerID="6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71" exitCode=0 Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.553520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerDied","Data":"6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71"} Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.024123 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 
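The pairing above of "Generic (PLEG): container finished" (generic.go) with "SyncLoop (PLEG): event for pod" (kubelet.go) reflects how the pod lifecycle event generator works: it periodically relists containers from the runtime, diffs the result against its previous snapshot, and emits one event per observed transition for the sync loop to consume. A minimal sketch of that relist-and-diff idea (illustrative, not kubelet source):

```go
package main

import "fmt"

type containerState string

const (
	stateRunning containerState = "running"
	stateExited  containerState = "exited"
)

type lifecycleEvent struct {
	containerID string
	kind        string // e.g. "ContainerDied", "ContainerStarted"
}

// relist diffs the runtime's current container states against the previous
// snapshot and emits one event per observed transition, which the sync loop
// then reports as "SyncLoop (PLEG): event for pod".
func relist(prev, cur map[string]containerState) []lifecycleEvent {
	var events []lifecycleEvent
	for id, state := range cur {
		switch {
		case prev[id] != stateRunning && state == stateRunning:
			events = append(events, lifecycleEvent{id, "ContainerStarted"})
		case prev[id] == stateRunning && state == stateExited:
			events = append(events, lifecycleEvent{id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	prev := map[string]containerState{"6ae8ebe0c529": stateRunning}
	cur := map[string]containerState{"6ae8ebe0c529": stateExited}
	for _, ev := range relist(prev, cur) {
		fmt.Printf("Generic (PLEG): container finished -> %s containerID=%q\n", ev.kind, ev.containerID)
	}
}
```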
Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.024123 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"]
Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.040700 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"]
Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.798036 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" path="/var/lib/kubelet/pods/8eda7c2f-1cb1-4fcc-840b-16699d95e267/volumes"
Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.798995 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" path="/var/lib/kubelet/pods/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a/volumes"
Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.799622 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" path="/var/lib/kubelet/pods/f47244c1-eeda-40a8-b4ae-57e2d6175c7e/volumes"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.029911 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.039546 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"]
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.048956 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"]
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") "
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") "
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") "
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.158044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc" (OuterVolumeSpecName: "kube-api-access-69tlc") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "kube-api-access-69tlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.178352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory" (OuterVolumeSpecName: "inventory") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.183877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") on node \"crc\" DevicePath \"\""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254778 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254790 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerDied","Data":"632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538"}
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569636 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"]
Jan 21 15:58:41 crc kubenswrapper[4739]: E0121 15:58:41.673493 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673515 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673705 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.674425 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.676678 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.677088 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.680489 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.684036 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.690884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.763919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.764283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.764433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.818030 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.820086 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.838569 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.892967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.894060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.905921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.971375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.971621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.972331 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.991758 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.076199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.076373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.097562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.146489 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.484912 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.595446 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"8ea15aa9a539701f321e754b7aae844cf3b2a77d41a2ff608f457b83b290454e"} Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.671866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.689651 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.797046 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" path="/var/lib/kubelet/pods/fe9459ad-de74-49f2-b35f-040c2b873848/volumes" Jan 21 15:58:42 crc kubenswrapper[4739]: E0121 15:58:42.825952 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podade1ee36_99f9_48e2_ab57_0b1e9f38331f.slice/crio-a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podade1ee36_99f9_48e2_ab57_0b1e9f38331f.slice/crio-conmon-a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.028616 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.035957 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.605701 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43" exitCode=0 Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.606094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.607893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerStarted","Data":"13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.607938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerStarted","Data":"02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.647264 4739 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" podStartSLOduration=2.180807104 podStartE2EDuration="2.647244696s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="2026-01-21 15:58:42.689254102 +0000 UTC m=+1954.379960366" lastFinishedPulling="2026-01-21 15:58:43.155691704 +0000 UTC m=+1954.846397958" observedRunningTime="2026-01-21 15:58:43.640631607 +0000 UTC m=+1955.331337871" watchObservedRunningTime="2026-01-21 15:58:43.647244696 +0000 UTC m=+1955.337950960" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.028212 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.036500 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.044363 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.053283 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.797654 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" path="/var/lib/kubelet/pods/5ed41032-b872-4711-ab4c-79ed5f33053f/volumes" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.798533 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" path="/var/lib/kubelet/pods/b1635150-ea8b-4b37-b129-7ade970b52ee/volumes" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.799939 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" path="/var/lib/kubelet/pods/deda4862-d2cc-41a1-b82f-067b3c4ad84f/volumes" Jan 21 15:58:45 crc kubenswrapper[4739]: I0121 15:58:45.649335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84"} Jan 21 15:58:47 crc kubenswrapper[4739]: I0121 15:58:47.684344 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84" exitCode=0 Jan 21 15:58:47 crc kubenswrapper[4739]: I0121 15:58:47.684411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84"} Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.112973 4739 scope.go:117] "RemoveContainer" containerID="4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.159055 4739 scope.go:117] "RemoveContainer" containerID="79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.194032 4739 scope.go:117] "RemoveContainer" containerID="e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.237087 4739 scope.go:117] "RemoveContainer" 
containerID="0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.278018 4739 scope.go:117] "RemoveContainer" containerID="e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.322841 4739 scope.go:117] "RemoveContainer" containerID="6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.367697 4739 scope.go:117] "RemoveContainer" containerID="69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.384934 4739 scope.go:117] "RemoveContainer" containerID="b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.424915 4739 scope.go:117] "RemoveContainer" containerID="10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.697497 4739 generic.go:334] "Generic (PLEG): container finished" podID="94267df6-5e7f-4409-a219-d42dabb28d43" containerID="13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd" exitCode=0 Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.697735 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerDied","Data":"13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd"} Jan 21 15:58:49 crc kubenswrapper[4739]: I0121 15:58:49.709654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c"} Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.168290 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.193506 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-295lt" podStartSLOduration=4.18075682 podStartE2EDuration="9.193481226s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="2026-01-21 15:58:43.60867927 +0000 UTC m=+1955.299385534" lastFinishedPulling="2026-01-21 15:58:48.621403666 +0000 UTC m=+1960.312109940" observedRunningTime="2026-01-21 15:58:49.739554174 +0000 UTC m=+1961.430260448" watchObservedRunningTime="2026-01-21 15:58:50.193481226 +0000 UTC m=+1961.884187490" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.329963 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.330089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.330203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.336959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4" (OuterVolumeSpecName: "kube-api-access-pjnm4") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "kube-api-access-pjnm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.355883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.371577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory" (OuterVolumeSpecName: "inventory") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432158 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432384 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432481 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.717900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerDied","Data":"02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3"} Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.718849 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.717961 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793457 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:50 crc kubenswrapper[4739]: E0121 15:58:50.793752 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793769 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793951 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.794480 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.797924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.798255 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.798601 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.801387 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.807053 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.042613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.043069 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.043195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.046439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.047548 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.076383 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.112005 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.644857 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.748773 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerStarted","Data":"f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf"} Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.148881 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.148935 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.206365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.619054 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.621438 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.639734 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690092 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792174 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792558 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.823020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.950058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:54 crc kubenswrapper[4739]: I0121 15:58:54.416426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:54 crc kubenswrapper[4739]: I0121 15:58:54.770615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"cc3e195bf8be94ce08483714a927b9ae814a971b4cb47c104657e649791610ab"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.797697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerStarted","Data":"6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.798233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.798934 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf" exitCode=0 Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.854491 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" podStartSLOduration=2.285904731 podStartE2EDuration="6.854470299s" podCreationTimestamp="2026-01-21 15:58:50 +0000 UTC" firstStartedPulling="2026-01-21 15:58:51.657195148 +0000 UTC m=+1963.347901432" lastFinishedPulling="2026-01-21 15:58:56.225760736 +0000 UTC m=+1967.916467000" observedRunningTime="2026-01-21 15:58:56.825706048 +0000 UTC m=+1968.516412312" watchObservedRunningTime="2026-01-21 15:58:56.854470299 +0000 UTC m=+1968.545176573" Jan 21 15:58:58 crc kubenswrapper[4739]: I0121 15:58:58.813091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a"} Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.197545 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.243677 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.850307 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-295lt" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" containerID="cri-o://d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" 
gracePeriod=2 Jan 21 15:59:03 crc kubenswrapper[4739]: I0121 15:59:03.860844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c"} Jan 21 15:59:03 crc kubenswrapper[4739]: I0121 15:59:03.860793 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" exitCode=0 Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.069753 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.230881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231337 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities" (OuterVolumeSpecName: "utilities") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.232192 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.237035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs" (OuterVolumeSpecName: "kube-api-access-966cs") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "kube-api-access-966cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.254052 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "catalog-content". 
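
"Killing container with a grace period ... gracePeriod=2" above is the standard two-phase stop: the runtime delivers the stop signal (SIGTERM by default) and waits up to the grace period before a hard SIGKILL; here the registry-server exited about a second after the kill, inside its 2s grace. A minimal sketch of that wait, assuming a done channel stands in for the container exiting on its own:

package main

import (
	"context"
	"fmt"
	"time"
)

func stopContainer(done <-chan struct{}, grace time.Duration) {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	fmt.Println("sent SIGTERM, waiting for exit")
	select {
	case <-done:
		fmt.Println("container exited within grace period")
	case <-ctx.Done():
		fmt.Println("grace period elapsed, sending SIGKILL")
	}
}

func main() {
	done := make(chan struct{})
	go func() { time.Sleep(500 * time.Millisecond); close(done) }()
	stopContainer(done, 2*time.Second)
}
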
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.334148 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.334186 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.888982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"8ea15aa9a539701f321e754b7aae844cf3b2a77d41a2ff608f457b83b290454e"} Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.889029 4739 scope.go:117] "RemoveContainer" containerID="d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.889139 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.910600 4739 scope.go:117] "RemoveContainer" containerID="d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.917745 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.924114 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.941142 4739 scope.go:117] "RemoveContainer" containerID="a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43" Jan 21 15:59:08 crc kubenswrapper[4739]: I0121 15:59:08.794572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" path="/var/lib/kubelet/pods/ade1ee36-99f9-48e2-ab57-0b1e9f38331f/volumes" Jan 21 15:59:10 crc kubenswrapper[4739]: E0121 15:59:10.101804 4739 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.319s" Jan 21 15:59:10 crc kubenswrapper[4739]: I0121 15:59:10.357576 4739 trace.go:236] Trace[934113536]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-9kr85" (21-Jan-2026 15:59:08.747) (total time: 1609ms): Jan 21 15:59:10 crc kubenswrapper[4739]: Trace[934113536]: [1.609921708s] [1.609921708s] END Jan 21 15:59:12 crc kubenswrapper[4739]: I0121 15:59:12.085032 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4cfnm" podUID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:59:15 crc kubenswrapper[4739]: I0121 15:59:15.974081 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a" exitCode=0 Jan 21 15:59:15 crc kubenswrapper[4739]: I0121 15:59:15.974159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a"} Jan 21 15:59:17 crc kubenswrapper[4739]: I0121 15:59:17.994789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670"} Jan 21 15:59:18 crc kubenswrapper[4739]: I0121 15:59:18.021448 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kr85" podStartSLOduration=4.475177134 podStartE2EDuration="25.021424139s" podCreationTimestamp="2026-01-21 15:58:53 +0000 UTC" firstStartedPulling="2026-01-21 15:58:56.799511948 +0000 UTC m=+1968.490218212" lastFinishedPulling="2026-01-21 15:59:17.345758953 +0000 UTC m=+1989.036465217" observedRunningTime="2026-01-21 15:59:18.013072392 +0000 UTC m=+1989.703778656" watchObservedRunningTime="2026-01-21 15:59:18.021424139 +0000 UTC m=+1989.712130413" Jan 21 15:59:23 crc kubenswrapper[4739]: I0121 15:59:23.951185 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:23 crc kubenswrapper[4739]: I0121 15:59:23.951782 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:25 crc kubenswrapper[4739]: I0121 15:59:25.005418 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9kr85" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" probeResult="failure" output=< Jan 21 15:59:25 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 15:59:25 crc kubenswrapper[4739]: > Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.004680 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.058860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.244159 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:35 crc kubenswrapper[4739]: I0121 15:59:35.143707 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kr85" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" containerID="cri-o://c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" gracePeriod=2 Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.154342 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" exitCode=0 Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.154584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670"} Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.385131 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449772 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.451186 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities" (OuterVolumeSpecName: "utilities") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.463615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt" (OuterVolumeSpecName: "kube-api-access-tk7kt") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "kube-api-access-tk7kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.552622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.552904 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.592505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.654429 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.193233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"cc3e195bf8be94ce08483714a927b9ae814a971b4cb47c104657e649791610ab"} Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.193335 4739 scope.go:117] "RemoveContainer" containerID="c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.194024 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.240303 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.240695 4739 scope.go:117] "RemoveContainer" containerID="16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.255408 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.320624 4739 scope.go:117] "RemoveContainer" containerID="623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf" Jan 21 15:59:38 crc kubenswrapper[4739]: I0121 15:59:38.794273 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" path="/var/lib/kubelet/pods/91784378-f2e5-4c19-b0a5-3406081b2a22/volumes" Jan 21 15:59:44 crc kubenswrapper[4739]: E0121 15:59:44.189338 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffbf410d_034d_4e44_a4fe_7146838c4cce.slice/crio-conmon-6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:59:44 crc kubenswrapper[4739]: I0121 15:59:44.254394 4739 generic.go:334] "Generic (PLEG): container finished" podID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerID="6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb" exitCode=0 Jan 21 15:59:44 crc kubenswrapper[4739]: I0121 15:59:44.254437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerDied","Data":"6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb"} Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.668372 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729861 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.739080 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58" (OuterVolumeSpecName: "kube-api-access-v2k58") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "kube-api-access-v2k58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.760382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory" (OuterVolumeSpecName: "inventory") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.762104 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832430 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832487 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832502 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerDied","Data":"f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf"} Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276941 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276953 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410495 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410932 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410954 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410976 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410984 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410997 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411005 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411016 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411026 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411042 
4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411049 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411062 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411068 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411100 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411301 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411329 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411342 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.412294 4739 util.go:30] "No sandbox for pod can be found. 
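
The RemoveStaleState burst above is the CPU and memory managers dropping per-container resource-assignment state for pods that no longer exist (the marketplace and install-os pods deleted earlier in this log). A sketch of that cleanup, assuming state keyed by podUID and container name rather than the managers' real data structures:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStale drops every state entry whose pod is no longer active,
// mirroring the "RemoveStaleState: removing container" lines above.
func removeStale(state map[key]string, active map[string]bool) {
	for k := range state { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"ade1ee36-99f9-48e2-ab57-0b1e9f38331f", "registry-server"}: "cpuset=0-3",
		{"ffbf410d-034d-4e44-a4fe-7146838c4cce", "install-os-edpm-deployment-openstack-edpm-ipam"}: "cpuset=0-3",
	}
	removeStale(state, map[string]bool{}) // no active pods reference these UIDs
	fmt.Println("remaining entries:", len(state))
}
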
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.416300 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.416477 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.417483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.425756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.429470 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.552752 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.553215 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.553270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: 
\"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.660114 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.665623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.671328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.730107 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:47 crc kubenswrapper[4739]: I0121 15:59:47.226507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:47 crc kubenswrapper[4739]: I0121 15:59:47.287051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerStarted","Data":"187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f"} Jan 21 15:59:49 crc kubenswrapper[4739]: I0121 15:59:49.309668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerStarted","Data":"0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8"} Jan 21 15:59:49 crc kubenswrapper[4739]: I0121 15:59:49.335839 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" podStartSLOduration=2.138832247 podStartE2EDuration="3.335792049s" podCreationTimestamp="2026-01-21 15:59:46 +0000 UTC" firstStartedPulling="2026-01-21 15:59:47.233672756 +0000 UTC m=+2018.924379020" lastFinishedPulling="2026-01-21 15:59:48.430632558 +0000 UTC m=+2020.121338822" observedRunningTime="2026-01-21 15:59:49.326515169 +0000 UTC m=+2021.017221423" watchObservedRunningTime="2026-01-21 15:59:49.335792049 +0000 UTC m=+2021.026498313" Jan 21 15:59:53 crc kubenswrapper[4739]: I0121 15:59:53.347918 4739 generic.go:334] "Generic (PLEG): container finished" podID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" 
containerID="0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8" exitCode=0 Jan 21 15:59:53 crc kubenswrapper[4739]: I0121 15:59:53.348011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerDied","Data":"0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8"} Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.044788 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.055631 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.794552 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.794757 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" path="/var/lib/kubelet/pods/7f2f9172-8721-4518-ac4e-eec07c9fe663/volumes" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914620 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.922103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6" (OuterVolumeSpecName: "kube-api-access-8gdn6") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "kube-api-access-8gdn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.986981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory" (OuterVolumeSpecName: "inventory") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.987672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017192 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017432 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017506 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.363997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerDied","Data":"187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f"} Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.364036 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.364349 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.443419 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:55 crc kubenswrapper[4739]: E0121 15:59:55.443949 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444018 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444919 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.447955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.448671 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.448883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.449073 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.465177 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.526943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.527195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.527311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754fx\" (UniqueName: 
\"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.633455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.634306 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.649501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.770628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:56 crc kubenswrapper[4739]: I0121 15:59:56.308726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:56 crc kubenswrapper[4739]: I0121 15:59:56.373017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerStarted","Data":"ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d"} Jan 21 15:59:57 crc kubenswrapper[4739]: I0121 15:59:57.383932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerStarted","Data":"f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627"} Jan 21 15:59:57 crc kubenswrapper[4739]: I0121 15:59:57.405373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" podStartSLOduration=1.8056255650000002 podStartE2EDuration="2.405354509s" podCreationTimestamp="2026-01-21 15:59:55 +0000 UTC" firstStartedPulling="2026-01-21 15:59:56.328880413 +0000 UTC m=+2028.019586677" lastFinishedPulling="2026-01-21 15:59:56.928609357 +0000 UTC m=+2028.619315621" observedRunningTime="2026-01-21 15:59:57.399946782 +0000 UTC m=+2029.090653046" watchObservedRunningTime="2026-01-21 15:59:57.405354509 +0000 UTC m=+2029.096060773" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.171933 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:00 crc 
kubenswrapper[4739]: I0121 16:00:00.173456 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.178100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.178363 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.184513 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.444940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.449494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.462026 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.506499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.967159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.037908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerStarted","Data":"dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a"} Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.038395 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerStarted","Data":"2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5"} Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.057706 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" podStartSLOduration=2.05768585 podStartE2EDuration="2.05768585s" podCreationTimestamp="2026-01-21 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:02.053939959 +0000 UTC m=+2033.744646223" watchObservedRunningTime="2026-01-21 16:00:02.05768585 +0000 UTC m=+2033.748392114" Jan 21 16:00:03 crc kubenswrapper[4739]: I0121 16:00:03.047874 4739 generic.go:334] "Generic (PLEG): container finished" podID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerID="dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a" exitCode=0 Jan 21 16:00:03 crc kubenswrapper[4739]: I0121 16:00:03.047925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerDied","Data":"dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a"} Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.418920 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456188 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.462770 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.464381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24" (OuterVolumeSpecName: "kube-api-access-g8f24") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "kube-api-access-g8f24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557457 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557520 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557535 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064692 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerDied","Data":"2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5"} Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064741 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064776 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.148767 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.157748 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.222537 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.222595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:06 crc kubenswrapper[4739]: I0121 16:00:06.794700 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" path="/var/lib/kubelet/pods/1aac4099-92f1-43a7-96e1-50d45566cf54/volumes" Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.035432 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"] Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.046722 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"] Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.792820 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" 
path="/var/lib/kubelet/pods/bee6ce08-4c84-436e-bf6c-78edfd72079e/volumes" Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.047770 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.062409 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.792549 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" path="/var/lib/kubelet/pods/a5fdc51e-5890-4f55-8693-275865a73e2a/volumes" Jan 21 16:00:35 crc kubenswrapper[4739]: I0121 16:00:35.222850 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:35 crc kubenswrapper[4739]: I0121 16:00:35.223539 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.687676 4739 scope.go:117] "RemoveContainer" containerID="64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.773337 4739 scope.go:117] "RemoveContainer" containerID="4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.823025 4739 scope.go:117] "RemoveContainer" containerID="5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.866784 4739 scope.go:117] "RemoveContainer" containerID="5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514" Jan 21 16:00:55 crc kubenswrapper[4739]: I0121 16:00:55.484163 4739 generic.go:334] "Generic (PLEG): container finished" podID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerID="f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627" exitCode=0 Jan 21 16:00:55 crc kubenswrapper[4739]: I0121 16:00:55.484247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerDied","Data":"f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627"} Jan 21 16:00:56 crc kubenswrapper[4739]: I0121 16:00:56.947841 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087155 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087909 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.094788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx" (OuterVolumeSpecName: "kube-api-access-754fx") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "kube-api-access-754fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.117092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory" (OuterVolumeSpecName: "inventory") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.118923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191080 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191482 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191506 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerDied","Data":"ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d"} Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502712 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502747 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.603725 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:57 crc kubenswrapper[4739]: E0121 16:00:57.604107 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604124 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: E0121 16:00:57.604158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604163 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604311 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604328 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604910 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.608911 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.609094 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.609167 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.610147 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.620745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc 
kubenswrapper[4739]: I0121 16:00:57.810503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.810505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.828106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.928273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:58 crc kubenswrapper[4739]: I0121 16:00:58.275278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:58 crc kubenswrapper[4739]: I0121 16:00:58.511079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerStarted","Data":"c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138"} Jan 21 16:00:59 crc kubenswrapper[4739]: I0121 16:00:59.518947 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerStarted","Data":"79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5"} Jan 21 16:00:59 crc kubenswrapper[4739]: I0121 16:00:59.539352 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" podStartSLOduration=1.730931713 podStartE2EDuration="2.539331802s" podCreationTimestamp="2026-01-21 16:00:57 +0000 UTC" firstStartedPulling="2026-01-21 16:00:58.275009445 +0000 UTC m=+2089.965715709" lastFinishedPulling="2026-01-21 16:00:59.083409544 +0000 UTC m=+2090.774115798" observedRunningTime="2026-01-21 16:00:59.536418663 +0000 UTC m=+2091.227124937" watchObservedRunningTime="2026-01-21 16:00:59.539331802 +0000 UTC m=+2091.230038066" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.171968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.173616 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.181237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250449 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250503 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250584 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.353112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.360254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.360730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.369753 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.375752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.499795 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:01 crc kubenswrapper[4739]: I0121 16:01:01.002873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:01 crc kubenswrapper[4739]: I0121 16:01:01.553088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerStarted","Data":"dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce"} Jan 21 16:01:02 crc kubenswrapper[4739]: I0121 16:01:02.562807 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerStarted","Data":"a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6"} Jan 21 16:01:02 crc kubenswrapper[4739]: I0121 16:01:02.583994 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483521-cztpq" podStartSLOduration=2.583970529 podStartE2EDuration="2.583970529s" podCreationTimestamp="2026-01-21 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:01:02.581469631 +0000 UTC m=+2094.272175895" watchObservedRunningTime="2026-01-21 16:01:02.583970529 +0000 UTC m=+2094.274676803" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.082881 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.093127 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.222978 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.223432 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.223531 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.224313 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.224430 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f" gracePeriod=600 Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597634 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f" exitCode=0 Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597754 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.607489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"} Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.610261 4739 generic.go:334] "Generic (PLEG): container finished" podID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerID="a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6" exitCode=0 Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.610299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerDied","Data":"a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6"} Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.793234 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" path="/var/lib/kubelet/pods/e757d911-c2e0-4498-8b03-1b83fedc6e0e/volumes" Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.618982 4739 generic.go:334] "Generic 
(PLEG): container finished" podID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerID="79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5" exitCode=0 Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.619160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerDied","Data":"79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5"} Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.948365 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.005784 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.005961 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.006060 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.006090 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.012749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc" (OuterVolumeSpecName: "kube-api-access-6rtlc") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "kube-api-access-6rtlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.026666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.042734 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.057519 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data" (OuterVolumeSpecName: "config-data") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107663 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107722 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107732 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107741 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627850 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerDied","Data":"dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce"} Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627987 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.050637 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.135552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.135944 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.136137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.144571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk" (OuterVolumeSpecName: "kube-api-access-6kfmk") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "kube-api-access-6kfmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.161009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.162603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238866 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238875 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640157 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerDied","Data":"c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138"} Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640200 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640198 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.728674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:09 crc kubenswrapper[4739]: E0121 16:01:09.729031 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729049 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: E0121 16:01:09.729071 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729078 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729247 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729770 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740135 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740171 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.748948 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.853599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.866091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.869935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.049989 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.566322 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.649142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerStarted","Data":"316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41"} Jan 21 16:01:12 crc kubenswrapper[4739]: I0121 16:01:12.676657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerStarted","Data":"f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73"} Jan 21 16:01:12 crc kubenswrapper[4739]: I0121 16:01:12.710913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" podStartSLOduration=2.581070416 podStartE2EDuration="3.710895098s" podCreationTimestamp="2026-01-21 16:01:09 +0000 UTC" firstStartedPulling="2026-01-21 16:01:10.572520282 +0000 UTC m=+2102.263226546" lastFinishedPulling="2026-01-21 16:01:11.702344964 +0000 UTC m=+2103.393051228" observedRunningTime="2026-01-21 16:01:12.704069103 +0000 UTC m=+2104.394775387" watchObservedRunningTime="2026-01-21 16:01:12.710895098 +0000 UTC m=+2104.401601362" Jan 21 16:01:21 crc kubenswrapper[4739]: I0121 16:01:21.745531 4739 generic.go:334] "Generic (PLEG): container finished" podID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerID="f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73" exitCode=0 Jan 21 16:01:21 crc kubenswrapper[4739]: I0121 16:01:21.745614 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerDied","Data":"f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73"} Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.183847 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307367 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.312682 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k" (OuterVolumeSpecName: "kube-api-access-7k59k") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "kube-api-access-7k59k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.336157 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory" (OuterVolumeSpecName: "inventory") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.336955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409507 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409545 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409558 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerDied","Data":"316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41"} Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763876 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763883 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.831697 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:23 crc kubenswrapper[4739]: E0121 16:01:23.832133 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.832158 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.832362 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.833142 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.835894 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840051 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840322 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.855052 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.929912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.930354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.930422 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.031848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.031921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.032024 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.036517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.036731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.047024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.152159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.729746 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:24 crc kubenswrapper[4739]: W0121 16:01:24.732510 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96e63b4_1388_49c6_a472_98bd5b480606.slice/crio-419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6 WatchSource:0}: Error finding container 419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6: Status 404 returned error can't find the container with id 419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6 Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.772865 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerStarted","Data":"419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6"} Jan 21 16:01:26 crc kubenswrapper[4739]: I0121 16:01:26.797810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerStarted","Data":"13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8"} Jan 21 16:01:26 crc kubenswrapper[4739]: I0121 16:01:26.814523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" podStartSLOduration=2.185728562 podStartE2EDuration="3.814501037s" podCreationTimestamp="2026-01-21 16:01:23 +0000 UTC" firstStartedPulling="2026-01-21 16:01:24.734521453 +0000 UTC m=+2116.425227717" lastFinishedPulling="2026-01-21 16:01:26.363293928 +0000 UTC 
m=+2118.054000192" observedRunningTime="2026-01-21 16:01:26.811900826 +0000 UTC m=+2118.502607110" watchObservedRunningTime="2026-01-21 16:01:26.814501037 +0000 UTC m=+2118.505207301" Jan 21 16:01:37 crc kubenswrapper[4739]: I0121 16:01:37.893800 4739 generic.go:334] "Generic (PLEG): container finished" podID="d96e63b4-1388-49c6-a472-98bd5b480606" containerID="13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8" exitCode=0 Jan 21 16:01:37 crc kubenswrapper[4739]: I0121 16:01:37.893867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerDied","Data":"13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8"} Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.364952 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552110 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552290 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.560083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl" (OuterVolumeSpecName: "kube-api-access-pzlbl") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "kube-api-access-pzlbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.576393 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory" (OuterVolumeSpecName: "inventory") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.583024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654241 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654274 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654286 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerDied","Data":"419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6"} Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915120 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915184 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:48 crc kubenswrapper[4739]: I0121 16:01:48.985173 4739 scope.go:117] "RemoveContainer" containerID="34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.480767 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:15 crc kubenswrapper[4739]: E0121 16:03:15.481760 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.481779 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.482024 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.483475 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.496773 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.574963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.575508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.575646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677574 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.678336 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.678398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.711061 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.806318 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.317714 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.727215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.727547 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"b2f3b2a1d4c94e5b14b2a4292d0ca130a7253e26f772fee0e3087badf6f151d5"} Jan 21 16:03:17 crc kubenswrapper[4739]: I0121 16:03:17.737573 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4739]: I0121 16:03:17.737629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} Jan 21 16:03:18 crc kubenswrapper[4739]: I0121 16:03:18.752780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} Jan 21 16:03:19 crc kubenswrapper[4739]: I0121 16:03:19.763057 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" exitCode=0 Jan 21 16:03:19 crc kubenswrapper[4739]: I0121 16:03:19.763109 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} Jan 21 16:03:20 crc kubenswrapper[4739]: I0121 16:03:20.774093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} Jan 21 16:03:20 crc kubenswrapper[4739]: I0121 16:03:20.804443 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5dgr" podStartSLOduration=3.087392023 podStartE2EDuration="5.804425649s" podCreationTimestamp="2026-01-21 16:03:15 +0000 UTC" firstStartedPulling="2026-01-21 16:03:17.740445961 +0000 UTC m=+2229.431152225" lastFinishedPulling="2026-01-21 
16:03:20.457479577 +0000 UTC m=+2232.148185851" observedRunningTime="2026-01-21 16:03:20.795131096 +0000 UTC m=+2232.485837380" watchObservedRunningTime="2026-01-21 16:03:20.804425649 +0000 UTC m=+2232.495131913" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.807547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.808209 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.892507 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.989338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:26 crc kubenswrapper[4739]: I0121 16:03:26.150356 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:27 crc kubenswrapper[4739]: I0121 16:03:27.850609 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5dgr" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server" containerID="cri-o://425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" gracePeriod=2 Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.304332 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419335 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419444 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.420554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities" (OuterVolumeSpecName: "utilities") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.426549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t" (OuterVolumeSpecName: "kube-api-access-fm59t") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "kube-api-access-fm59t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.473154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522019 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522062 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522076 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861785 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" exitCode=0 Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861855 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861914 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"b2f3b2a1d4c94e5b14b2a4292d0ca130a7253e26f772fee0e3087badf6f151d5"} Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861935 4739 scope.go:117] "RemoveContainer" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.891191 4739 scope.go:117] "RemoveContainer" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.891208 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.908058 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.912110 4739 scope.go:117] "RemoveContainer" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953321 4739 scope.go:117] "RemoveContainer" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.953682 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": container with ID starting with 425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7 not found: ID does not exist" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953727 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} err="failed to get container status \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": rpc error: code = NotFound desc = could not find container \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": container with ID starting with 425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7 not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953755 4739 scope.go:117] "RemoveContainer" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.954031 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": container with ID starting with 79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe not found: ID does not exist" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954059 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} err="failed to get container status \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": rpc error: code = NotFound desc = could not find container \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": container with ID starting with 79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954072 4739 scope.go:117] "RemoveContainer" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.954441 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": container with ID starting with aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b not found: ID does not exist" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954464 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} err="failed to get container status \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": rpc error: code = NotFound desc = could not find container \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": container with ID starting with aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b not found: ID does not exist" Jan 21 16:03:30 crc kubenswrapper[4739]: I0121 16:03:30.795043 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" path="/var/lib/kubelet/pods/1f3919ab-0302-4408-8d85-c1e3158465d9/volumes" Jan 21 16:03:35 crc kubenswrapper[4739]: I0121 16:03:35.222591 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:03:35 crc kubenswrapper[4739]: I0121 16:03:35.223129 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:05 crc kubenswrapper[4739]: I0121 16:04:05.223158 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:05 crc kubenswrapper[4739]: I0121 16:04:05.223741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 
16:04:30.321220 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stklf"] Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322124 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-utilities" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322138 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-utilities" Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-content" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-content" Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322187 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322198 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322380 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.323541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.347553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stklf"] Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.466958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.467082 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.467111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.577903 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc 
kubenswrapper[4739]: I0121 16:04:30.578024 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.578062 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.579246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.579489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.627871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.644575 4739 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.644575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:31 crc kubenswrapper[4739]: I0121 16:04:31.384106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:31 crc kubenswrapper[4739]: I0121 16:04:31.483361 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"5a79b2eb72d5c2ac009396664aaab1b97a8df8b31b33e94c2f5ad57244c72ea0"}
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.493186 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0" exitCode=0
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.493354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"}
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.495946 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:04:33 crc kubenswrapper[4739]: I0121 16:04:33.505384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"}
Jan 21 16:04:34 crc kubenswrapper[4739]: I0121 16:04:34.529269 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f" exitCode=0
Jan 21 16:04:34 crc kubenswrapper[4739]: I0121 16:04:34.529323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.222961 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.223566 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.223611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
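The Liveness failure above records a refused GET on http://127.0.0.1:8798/health, the signature of an httpGet probe against the daemon's health endpoint on the host network. A sketch of a probe definition consistent with that output; the host, path, and port come from the log line, while the timing fields are illustrative assumptions, not recoverable from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1", // from the probe output above
                    Path: "/health",   // from the probe output above
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // assumed
            FailureThreshold: 3,  // assumed
        }
        fmt.Printf("probe http://%s:%d%s\n",
            probe.HTTPGet.Host, probe.HTTPGet.Port.IntValue(), probe.HTTPGet.Path)
    }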
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.224403 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.224473 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" gracePeriod=600
Jan 21 16:04:35 crc kubenswrapper[4739]: E0121 16:04:35.353620 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.542286 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545546 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" exitCode=0
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545604 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545644 4739 scope.go:117] "RemoveContainer" containerID="780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.546157 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:04:35 crc kubenswrapper[4739]: E0121 16:04:35.546442 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.571306 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stklf" podStartSLOduration=3.127916356 podStartE2EDuration="5.571288335s" podCreationTimestamp="2026-01-21 16:04:30 +0000 UTC" firstStartedPulling="2026-01-21 16:04:32.49558052 +0000 UTC m=+2304.186286794" lastFinishedPulling="2026-01-21 16:04:34.938952509 +0000 UTC m=+2306.629658773" observedRunningTime="2026-01-21 16:04:35.5688674 +0000 UTC m=+2307.259573664" watchObservedRunningTime="2026-01-21 16:04:35.571288335 +0000 UTC m=+2307.261994599"
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.645903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stklf"
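The pod_startup_latency_tracker entry above carries enough timestamps to check its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which the SLO metric excludes. A short sketch reproducing the numbers, which match the logged values to within rounding:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Go accepts fractional seconds on parse even without them in the layout.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the tracker entry above (monotonic m=+... parts dropped).
        created := mustParse("2026-01-21 16:04:30 +0000 UTC")
        pullStart := mustParse("2026-01-21 16:04:32.49558052 +0000 UTC")
        pullEnd := mustParse("2026-01-21 16:04:34.938952509 +0000 UTC")
        running := mustParse("2026-01-21 16:04:35.571288335 +0000 UTC")

        e2e := running.Sub(created)         // 5.571288335s = podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // ~3.127916s = podStartSLOduration
        fmt.Println(e2e, slo)
    }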
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.646488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.693983 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:41 crc kubenswrapper[4739]: I0121 16:04:41.652114 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:41 crc kubenswrapper[4739]: I0121 16:04:41.702952 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:43 crc kubenswrapper[4739]: I0121 16:04:43.638144 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stklf" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server" containerID="cri-o://f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" gracePeriod=2
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.592242 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652013 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" exitCode=0
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652068 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"}
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"5a79b2eb72d5c2ac009396664aaab1b97a8df8b31b33e94c2f5ad57244c72ea0"}
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652146 4739 scope.go:117] "RemoveContainer" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652259 4739 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-stklf" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.677858 4739 scope.go:117] "RemoveContainer" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.700725 4739 scope.go:117] "RemoveContainer" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.738457 4739 scope.go:117] "RemoveContainer" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.739548 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": container with ID starting with f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62 not found: ID does not exist" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.739622 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"} err="failed to get container status \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": rpc error: code = NotFound desc = could not find container \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": container with ID starting with f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62 not found: ID does not exist" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.739657 4739 scope.go:117] "RemoveContainer" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f" Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.741344 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": container with ID starting with c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f not found: ID does not exist" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.741378 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"} err="failed to get container status \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": rpc error: code = NotFound desc = could not find container \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": container with ID starting with c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f not found: ID does not exist" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.741400 4739 scope.go:117] "RemoveContainer" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0" Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.742364 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": container with ID starting with 9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0 not found: ID does not exist" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0" 
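The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above and below are a benign race: after removing a container, the kubelet re-queries its status, and CRI-O answers with gRPC code NotFound because the ID is already gone. A sketch of how a CRI client typically separates that expected miss from a real runtime failure, using the standard gRPC status package (the handle function and sample errors are illustrative):

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // handle mimics the kubelet's treatment of a ContainerStatus error:
    // NotFound means the container was already removed and is not a failure.
    func handle(err error) {
        if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
            fmt.Println("already removed:", s.Message())
            return
        }
        fmt.Println("runtime error:", err)
    }

    func main() {
        handle(status.Error(codes.NotFound, "could not find container"))
        handle(errors.New("connection reset by peer"))
    }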
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.742467 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"} err="failed to get container status \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": rpc error: code = NotFound desc = could not find container \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": container with ID starting with 9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0 not found: ID does not exist" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762515 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.763338 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities" (OuterVolumeSpecName: "utilities") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.770123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh" (OuterVolumeSpecName: "kube-api-access-5nklh") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "kube-api-access-5nklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.819945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864537 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864585 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864595 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.993711 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stklf"] Jan 21 16:04:45 crc kubenswrapper[4739]: I0121 16:04:45.004063 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stklf"] Jan 21 16:04:46 crc kubenswrapper[4739]: I0121 16:04:46.783908 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:04:46 crc kubenswrapper[4739]: E0121 16:04:46.784437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:04:46 crc kubenswrapper[4739]: I0121 16:04:46.797382 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" path="/var/lib/kubelet/pods/beda0f35-bfcb-4881-a88e-b6f1c4e32de9/volumes" Jan 21 16:05:01 crc kubenswrapper[4739]: I0121 16:05:01.957316 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:05:01 crc kubenswrapper[4739]: E0121 16:05:01.964920 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:05:15 crc kubenswrapper[4739]: I0121 16:05:15.783572 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:05:15 crc kubenswrapper[4739]: E0121 16:05:15.784507 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:05:27 crc kubenswrapper[4739]: I0121 16:05:27.782981 4739 
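The RemoveContainer / "Error syncing pod, skipping" pairs that repeat from here to the end of the section are the kubelet's CrashLoopBackOff loop for machine-config-daemon: each periodic sync is refused while the back-off timer runs, so the same message recurs without an actual restart attempt. The delay itself doubles per failed restart from 10s up to the 5m cap quoted in the message, per the kubelet's documented default schedule; a sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CrashLoopBackOff schedule: 10s initial delay, doubling after each
        // failed restart, capped at 5m ("back-off 5m0s" in the log entries).
        const maxDelay = 5 * time.Minute
        delay := 10 * time.Second
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }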
Jan 21 16:05:27 crc kubenswrapper[4739]: I0121 16:05:27.782981 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:27 crc kubenswrapper[4739]: E0121 16:05:27.783727 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:40 crc kubenswrapper[4739]: I0121 16:05:40.784176 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:40 crc kubenswrapper[4739]: E0121 16:05:40.784995 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:52 crc kubenswrapper[4739]: I0121 16:05:52.783728 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:52 crc kubenswrapper[4739]: E0121 16:05:52.784710 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:05 crc kubenswrapper[4739]: I0121 16:06:05.783579 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:05 crc kubenswrapper[4739]: E0121 16:06:05.784493 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:17 crc kubenswrapper[4739]: I0121 16:06:17.783292 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:17 crc kubenswrapper[4739]: E0121 16:06:17.784123 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:28 crc kubenswrapper[4739]: I0121 16:06:28.792297 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:28 crc kubenswrapper[4739]: E0121 16:06:28.793151 4739
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:06:42 crc kubenswrapper[4739]: I0121 16:06:42.784036 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:06:42 crc kubenswrapper[4739]: E0121 16:06:42.784918 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:06:54 crc kubenswrapper[4739]: I0121 16:06:54.782923 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:06:54 crc kubenswrapper[4739]: E0121 16:06:54.784830 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:07:09 crc kubenswrapper[4739]: I0121 16:07:09.782532 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:07:09 crc kubenswrapper[4739]: E0121 16:07:09.783224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:07:24 crc kubenswrapper[4739]: I0121 16:07:24.783309 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:07:24 crc kubenswrapper[4739]: E0121 16:07:24.784106 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:07:38 crc kubenswrapper[4739]: I0121 16:07:38.789177 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:07:38 crc kubenswrapper[4739]: E0121 16:07:38.789907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.578168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.603337 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.627888 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.646887 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.663669 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.680883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.688173 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.705886 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.709885 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.726722 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.741890 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.749889 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.758146 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.765880 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.774267 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.784012 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.791397 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.800883 4739 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.807200 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.814410 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.792907 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" path="/var/lib/kubelet/pods/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.793918 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" path="/var/lib/kubelet/pods/294dabba-e6ac-404b-a3d4-0819c7baac6d/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.794482 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" path="/var/lib/kubelet/pods/437db458-4fe0-4cf6-b23f-895ff57c27c0/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.795095 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" path="/var/lib/kubelet/pods/71e02623-c543-47f0-8acc-cbf7a605ed34/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.796247 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" path="/var/lib/kubelet/pods/740d6fa5-02d2-47b9-9d55-1cc790a3edad/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.796831 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" path="/var/lib/kubelet/pods/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.797332 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" path="/var/lib/kubelet/pods/94267df6-5e7f-4409-a219-d42dabb28d43/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.798295 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" path="/var/lib/kubelet/pods/d96e63b4-1388-49c6-a472-98bd5b480606/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.798922 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" path="/var/lib/kubelet/pods/f07d5149-f4ed-41ce-9e12-9052a2a4772e/volumes" Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.799572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" path="/var/lib/kubelet/pods/ffbf410d-034d-4e44-a4fe-7146838c4cce/volumes" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.183521 4739 scope.go:117] "RemoveContainer" containerID="f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.228181 4739 scope.go:117] "RemoveContainer" containerID="0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.276274 4739 scope.go:117] "RemoveContainer" containerID="f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 
16:07:49.362734 4739 scope.go:117] "RemoveContainer" containerID="0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.423587 4739 scope.go:117] "RemoveContainer" containerID="79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.481931 4739 scope.go:117] "RemoveContainer" containerID="13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.518352 4739 scope.go:117] "RemoveContainer" containerID="6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.575450 4739 scope.go:117] "RemoveContainer" containerID="51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.675798 4739 scope.go:117] "RemoveContainer" containerID="13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8" Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.718407 4739 scope.go:117] "RemoveContainer" containerID="6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb" Jan 21 16:07:50 crc kubenswrapper[4739]: I0121 16:07:50.783277 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:07:50 crc kubenswrapper[4739]: E0121 16:07:50.783921 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930427 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"] Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930857 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-utilities" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930872 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-utilities" Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930915 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930923 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server" Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930941 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-content" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-content" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.931148 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.931797 4739 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937001 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937306 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937442 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.938197 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.938355 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.946674 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"] Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104148 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104262 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205356 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212961 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.213417 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.222644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.252095 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.807407 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"] Jan 21 16:07:54 crc kubenswrapper[4739]: I0121 16:07:54.413475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerStarted","Data":"3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6"} Jan 21 16:07:55 crc kubenswrapper[4739]: I0121 16:07:55.421342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerStarted","Data":"be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123"} Jan 21 16:07:55 crc kubenswrapper[4739]: I0121 16:07:55.441712 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" podStartSLOduration=2.811164931 podStartE2EDuration="3.441688415s" podCreationTimestamp="2026-01-21 16:07:52 +0000 UTC" firstStartedPulling="2026-01-21 16:07:53.811917851 +0000 UTC m=+2505.502624115" lastFinishedPulling="2026-01-21 16:07:54.442441335 +0000 UTC m=+2506.133147599" observedRunningTime="2026-01-21 16:07:55.437534242 +0000 UTC m=+2507.128240516" watchObservedRunningTime="2026-01-21 16:07:55.441688415 +0000 UTC m=+2507.132394679" Jan 21 16:08:05 crc kubenswrapper[4739]: I0121 16:08:05.784683 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:05 crc kubenswrapper[4739]: E0121 16:08:05.785660 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:11 crc kubenswrapper[4739]: I0121 16:08:11.573874 4739 generic.go:334] "Generic (PLEG): container finished" podID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerID="be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123" exitCode=0 Jan 21 16:08:11 crc kubenswrapper[4739]: I0121 16:08:11.573989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerDied","Data":"be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123"} Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:12.999886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.102675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.102872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103693 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.110507 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph" (OuterVolumeSpecName: "ceph") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.111996 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.113092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd" (OuterVolumeSpecName: "kube-api-access-p2rhd") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "kube-api-access-p2rhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.132688 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory" (OuterVolumeSpecName: "inventory") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.134385 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206060 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206104 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206115 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206134 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerDied","Data":"3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6"} Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592164 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592231 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.700465 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"] Jan 21 16:08:13 crc kubenswrapper[4739]: E0121 16:08:13.700957 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.700983 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.701204 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.701962 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.706665 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.706972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707505 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707656 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.711201 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"] Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: 
\"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816940 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.918982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919467 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919543 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.924568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" 
(UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.924808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.928338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.928569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.938028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:14 crc kubenswrapper[4739]: I0121 16:08:14.072738 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:14 crc kubenswrapper[4739]: I0121 16:08:14.821751 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"] Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.609731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerStarted","Data":"ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f"} Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.610100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerStarted","Data":"0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13"} Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.632662 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" podStartSLOduration=2.106356722 podStartE2EDuration="2.632643394s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="2026-01-21 16:08:14.820683192 +0000 UTC m=+2526.511389456" lastFinishedPulling="2026-01-21 16:08:15.346969864 +0000 UTC m=+2527.037676128" observedRunningTime="2026-01-21 16:08:15.626707981 +0000 UTC m=+2527.317414255" watchObservedRunningTime="2026-01-21 16:08:15.632643394 +0000 UTC m=+2527.323349668" Jan 21 16:08:16 crc kubenswrapper[4739]: I0121 16:08:16.785036 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:16 crc kubenswrapper[4739]: E0121 16:08:16.785626 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:30 crc kubenswrapper[4739]: I0121 16:08:30.783163 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:30 crc kubenswrapper[4739]: E0121 16:08:30.784173 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:43 crc kubenswrapper[4739]: I0121 16:08:43.783289 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:43 crc kubenswrapper[4739]: E0121 16:08:43.784123 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:54 crc kubenswrapper[4739]: I0121 16:08:54.783533 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:54 crc kubenswrapper[4739]: E0121 16:08:54.784315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:05 crc kubenswrapper[4739]: I0121 16:09:05.783128 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:05 crc kubenswrapper[4739]: E0121 16:09:05.783956 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:16 crc kubenswrapper[4739]: I0121 16:09:16.783462 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:16 crc kubenswrapper[4739]: E0121 16:09:16.784318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:30 crc kubenswrapper[4739]: I0121 16:09:30.783398 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:30 crc kubenswrapper[4739]: E0121 16:09:30.784211 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:43 crc kubenswrapper[4739]: I0121 16:09:43.782921 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:44 crc kubenswrapper[4739]: I0121 16:09:44.333753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.221041 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.223731 4739 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.231497 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438948 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.439546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.440541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.462750 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.605114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:47 crc kubenswrapper[4739]: I0121 16:09:47.106179 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:47 crc kubenswrapper[4739]: I0121 16:09:47.363978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"ef720b27b8fac81ce0c26590177a50b1d399fa1aa211dc28fd7129cffa243dee"} Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.372634 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" exitCode=0 Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.372709 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59"} Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.374503 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:09:50 crc kubenswrapper[4739]: I0121 16:09:50.391083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} Jan 21 16:09:54 crc kubenswrapper[4739]: I0121 16:09:54.420960 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" exitCode=0 Jan 21 16:09:54 crc kubenswrapper[4739]: I0121 16:09:54.421040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} Jan 21 16:10:00 crc kubenswrapper[4739]: I0121 16:10:00.474000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} Jan 21 16:10:00 crc kubenswrapper[4739]: I0121 16:10:00.497225 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xml2q" podStartSLOduration=3.5743218629999998 podStartE2EDuration="14.497205378s" podCreationTimestamp="2026-01-21 16:09:46 +0000 UTC" firstStartedPulling="2026-01-21 16:09:48.37422475 +0000 UTC m=+2620.064931014" lastFinishedPulling="2026-01-21 16:09:59.297108275 +0000 UTC m=+2630.987814529" observedRunningTime="2026-01-21 16:10:00.491691617 +0000 UTC m=+2632.182397901" watchObservedRunningTime="2026-01-21 16:10:00.497205378 +0000 UTC 
m=+2632.187911642" Jan 21 16:10:06 crc kubenswrapper[4739]: I0121 16:10:06.605554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:06 crc kubenswrapper[4739]: I0121 16:10:06.606229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:07 crc kubenswrapper[4739]: I0121 16:10:07.653270 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xml2q" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" probeResult="failure" output=< Jan 21 16:10:07 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:10:07 crc kubenswrapper[4739]: > Jan 21 16:10:16 crc kubenswrapper[4739]: I0121 16:10:16.657350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:16 crc kubenswrapper[4739]: I0121 16:10:16.716715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:17 crc kubenswrapper[4739]: I0121 16:10:17.422525 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:17 crc kubenswrapper[4739]: I0121 16:10:17.855703 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xml2q" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" containerID="cri-o://9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" gracePeriod=2 Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.370327 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523112 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523163 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523204 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities" (OuterVolumeSpecName: "utilities") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.529982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb" (OuterVolumeSpecName: "kube-api-access-n8vcb") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "kube-api-access-n8vcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.625771 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.625842 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.649684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.727588 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.865897 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" exitCode=0 Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.866038 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.866062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.867058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"ef720b27b8fac81ce0c26590177a50b1d399fa1aa211dc28fd7129cffa243dee"} Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.867078 4739 scope.go:117] "RemoveContainer" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.891965 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.893944 4739 scope.go:117] "RemoveContainer" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.899377 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.915835 4739 scope.go:117] "RemoveContainer" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.964910 4739 scope.go:117] "RemoveContainer" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.965649 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": container with ID starting with 9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12 not found: ID does not exist" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.965763 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} err="failed to get container status \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": rpc error: code = NotFound desc = could not find container \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": container with ID starting with 9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12 not found: ID does not exist" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.965869 4739 scope.go:117] "RemoveContainer" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.966337 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": container with ID starting with 43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4 not found: ID does not exist" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966377 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} err="failed to get container status \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": rpc error: code = NotFound desc = could not find container \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": container with ID starting with 43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4 not found: ID does not exist" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966415 4739 scope.go:117] "RemoveContainer" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.966757 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": container with ID starting with b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59 not found: ID does not exist" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966868 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59"} err="failed to get container status \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": rpc error: code = NotFound desc = could not find container \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": container with ID starting with b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59 not found: ID does not exist" Jan 21 16:10:20 crc kubenswrapper[4739]: I0121 16:10:20.800130 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" path="/var/lib/kubelet/pods/e6026a4d-2c9d-45d8-868a-38ccc9959c37/volumes" Jan 21 16:10:39 crc kubenswrapper[4739]: I0121 16:10:39.016518 4739 generic.go:334] "Generic (PLEG): container finished" podID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerID="ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f" exitCode=0 Jan 21 16:10:39 crc kubenswrapper[4739]: I0121 16:10:39.017010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerDied","Data":"ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f"} Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.461489 4739 util.go:48] "No ready sandbox for pod can be found. 
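
The error-level noise during cleanup above is expected: "RemoveContainer" deletes each of the three containers, and the follow-up ContainerStatus calls come back as gRPC NotFound because the runtime has already forgotten them; the deletor logs the error and carries on. The useful invariant is that deletion is idempotent, sketched here with the same NotFound semantics (illustrative shape, not kubelet code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a runtime NotFound exactly like the entries above:
// the container is already gone, so the delete is considered done.
func removeContainer(id string, rpc func(string) error) error {
	if err := rpc(id); status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container \""+id+"\"")
	}
	fmt.Println(removeContainer("9648ee036525", gone)) // <nil>: already removed
}
```
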
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569578 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569718 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569762 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.575886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph" (OuterVolumeSpecName: "ceph") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.576010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.586201 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff" (OuterVolumeSpecName: "kube-api-access-b25ff") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "kube-api-access-b25ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.595082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory" (OuterVolumeSpecName: "inventory") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.596796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671791 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671845 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671854 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671863 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671871 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerDied","Data":"0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13"} Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037446 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037530 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.136560 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137308 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-utilities" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137351 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-utilities" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137384 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137419 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-content" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137430 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-content" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137458 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137469 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137716 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137746 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.138416 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.142925 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.143311 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.144483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.151637 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.152067 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.152374 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 
16:10:41.389876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389902 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389937 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.409951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.410304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.416497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.447662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.504548 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:42 crc kubenswrapper[4739]: I0121 16:10:42.022661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:42 crc kubenswrapper[4739]: I0121 16:10:42.046871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerStarted","Data":"a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319"} Jan 21 16:10:43 crc kubenswrapper[4739]: I0121 16:10:43.057927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerStarted","Data":"05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662"} Jan 21 16:10:43 crc kubenswrapper[4739]: I0121 16:10:43.076925 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" podStartSLOduration=1.601593424 podStartE2EDuration="2.076906289s" podCreationTimestamp="2026-01-21 16:10:41 +0000 UTC" firstStartedPulling="2026-01-21 16:10:42.027691818 +0000 UTC m=+2673.718398072" lastFinishedPulling="2026-01-21 16:10:42.503004673 +0000 UTC m=+2674.193710937" observedRunningTime="2026-01-21 16:10:43.076295202 +0000 UTC m=+2674.767001476" watchObservedRunningTime="2026-01-21 16:10:43.076906289 +0000 UTC m=+2674.767612553" Jan 21 16:11:12 crc kubenswrapper[4739]: I0121 16:11:12.286563 4739 generic.go:334] "Generic (PLEG): container finished" podID="9559d041-04b3-47c2-8121-b348ad047032" containerID="05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662" exitCode=0 Jan 21 16:11:12 crc kubenswrapper[4739]: I0121 16:11:12.286686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerDied","Data":"05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662"} Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.762943 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.885922 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886002 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886156 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886246 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.891465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg" (OuterVolumeSpecName: "kube-api-access-bq4xg") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "kube-api-access-bq4xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.891687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph" (OuterVolumeSpecName: "ceph") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.911529 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory" (OuterVolumeSpecName: "inventory") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.915519 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988332 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988382 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988401 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988420 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.307924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerDied","Data":"a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319"} Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.307967 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.308082 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396051 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:14 crc kubenswrapper[4739]: E0121 16:11:14.396448 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396469 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396685 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.397294 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.398936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.400096 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.400247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.402699 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.406241 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.409679 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.596937 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.596983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.597036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.597439 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc 
kubenswrapper[4739]: I0121 16:11:14.699768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.704958 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.709293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.709418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.717319 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:15 crc kubenswrapper[4739]: I0121 16:11:15.013340 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:15 crc kubenswrapper[4739]: I0121 16:11:15.566277 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.325177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerStarted","Data":"a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627"} Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.325493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerStarted","Data":"8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83"} Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.347760 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" podStartSLOduration=1.890958469 podStartE2EDuration="2.347734477s" podCreationTimestamp="2026-01-21 16:11:14 +0000 UTC" firstStartedPulling="2026-01-21 16:11:15.569693222 +0000 UTC m=+2707.260399486" lastFinishedPulling="2026-01-21 16:11:16.02646924 +0000 UTC m=+2707.717175494" observedRunningTime="2026-01-21 16:11:16.342240448 +0000 UTC m=+2708.032946722" watchObservedRunningTime="2026-01-21 16:11:16.347734477 +0000 UTC m=+2708.038440741" Jan 21 16:11:21 crc kubenswrapper[4739]: I0121 16:11:21.364794 4739 generic.go:334] "Generic (PLEG): container finished" podID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerID="a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627" exitCode=0 Jan 21 16:11:21 crc kubenswrapper[4739]: I0121 16:11:21.364982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerDied","Data":"a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627"} Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.873578 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.955778 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.955950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.956053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.956120 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.962711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp" (OuterVolumeSpecName: "kube-api-access-hkkjp") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "kube-api-access-hkkjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.963339 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph" (OuterVolumeSpecName: "ceph") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.983774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.993010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory" (OuterVolumeSpecName: "inventory") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.058710 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059135 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059147 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerDied","Data":"8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83"} Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382343 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382359 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.459689 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:23 crc kubenswrapper[4739]: E0121 16:11:23.460366 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.460462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.460723 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.461458 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.464217 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.465747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.465897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.466036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.466181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467333 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467335 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.471100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.476887 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.566939 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.566995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.567034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.567103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.572689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.572771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.573444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.588596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.782608 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:24 crc kubenswrapper[4739]: I0121 16:11:24.333835 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:24 crc kubenswrapper[4739]: I0121 16:11:24.389420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerStarted","Data":"af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2"} Jan 21 16:11:25 crc kubenswrapper[4739]: I0121 16:11:25.399386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerStarted","Data":"8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304"} Jan 21 16:11:25 crc kubenswrapper[4739]: I0121 16:11:25.423316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" podStartSLOduration=2.033602618 podStartE2EDuration="2.423287658s" podCreationTimestamp="2026-01-21 16:11:23 +0000 UTC" firstStartedPulling="2026-01-21 16:11:24.336240201 +0000 UTC m=+2716.026946465" lastFinishedPulling="2026-01-21 16:11:24.725925241 +0000 UTC m=+2716.416631505" observedRunningTime="2026-01-21 16:11:25.41496563 +0000 UTC m=+2717.105671904" watchObservedRunningTime="2026-01-21 16:11:25.423287658 +0000 UTC m=+2717.113993922" Jan 21 16:11:58 crc kubenswrapper[4739]: I0121 16:11:58.987702 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:11:58 crc kubenswrapper[4739]: I0121 16:11:58.993980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.001159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.153089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.173588 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.366179 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.896843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.663809 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" exitCode=0 Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.663986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79"} Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.664388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"6e5f2cdd76319a7b91e21c06ee6f3162453eb854b39c4e28f0790998c1696ad2"} Jan 21 16:12:02 crc kubenswrapper[4739]: I0121 16:12:02.686977 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} Jan 21 16:12:03 crc kubenswrapper[4739]: I0121 16:12:03.696283 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4739]: I0121 16:12:03.696329 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.222575 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.223196 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.717146 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} Jan 21 16:12:05 crc 
kubenswrapper[4739]: I0121 16:12:05.747127 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t6tlm" podStartSLOduration=3.570092112 podStartE2EDuration="7.747104193s" podCreationTimestamp="2026-01-21 16:11:58 +0000 UTC" firstStartedPulling="2026-01-21 16:12:00.665712917 +0000 UTC m=+2752.356419181" lastFinishedPulling="2026-01-21 16:12:04.842724998 +0000 UTC m=+2756.533431262" observedRunningTime="2026-01-21 16:12:05.739987188 +0000 UTC m=+2757.430693462" watchObservedRunningTime="2026-01-21 16:12:05.747104193 +0000 UTC m=+2757.437810467" Jan 21 16:12:08 crc kubenswrapper[4739]: I0121 16:12:08.742117 4739 generic.go:334] "Generic (PLEG): container finished" podID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerID="8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304" exitCode=0 Jan 21 16:12:08 crc kubenswrapper[4739]: I0121 16:12:08.742461 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerDied","Data":"8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304"} Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.367065 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.368163 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.414465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.321368 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359694 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359835 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.360003 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.381995 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn" (OuterVolumeSpecName: "kube-api-access-r49hn") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "kube-api-access-r49hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.399998 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph" (OuterVolumeSpecName: "ceph") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.462264 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.462297 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.476387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.476499 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory" (OuterVolumeSpecName: "inventory") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.564872 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.564906 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerDied","Data":"af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2"} Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759505 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759218 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.814041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.877266 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:10 crc kubenswrapper[4739]: E0121 16:12:10.878237 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.878316 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.878676 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.879662 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891641 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.892006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.902963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073867 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.074378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176224 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176309 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.181681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.182442 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.186718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.200789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.203246 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.779227 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.796560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerStarted","Data":"55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f"} Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.796897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerStarted","Data":"e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9"} Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.826919 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" podStartSLOduration=2.115339376 podStartE2EDuration="2.826900711s" podCreationTimestamp="2026-01-21 16:12:10 +0000 UTC" firstStartedPulling="2026-01-21 16:12:11.774139703 +0000 UTC m=+2763.464845967" lastFinishedPulling="2026-01-21 16:12:12.485701018 +0000 UTC m=+2764.176407302" observedRunningTime="2026-01-21 16:12:12.816322321 +0000 UTC m=+2764.507028615" watchObservedRunningTime="2026-01-21 16:12:12.826900711 +0000 UTC m=+2764.517606975" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.052304 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.053016 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t6tlm" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" containerID="cri-o://67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" gracePeriod=2 Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.588647 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652834 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652896 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.654286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities" (OuterVolumeSpecName: "utilities") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.660121 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq" (OuterVolumeSpecName: "kube-api-access-6mjvq") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "kube-api-access-6mjvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.683031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755250 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755286 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755296 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.805176 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" exitCode=0 Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.805988 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.809939 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.810030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"6e5f2cdd76319a7b91e21c06ee6f3162453eb854b39c4e28f0790998c1696ad2"} Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.810051 4739 scope.go:117] "RemoveContainer" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.834796 4739 scope.go:117] "RemoveContainer" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.841786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.852023 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.859986 4739 scope.go:117] "RemoveContainer" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.901938 4739 scope.go:117] "RemoveContainer" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.902837 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": container with ID starting with 67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca not found: ID does not exist" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.902868 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} err="failed to get container status \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": rpc error: code = NotFound desc = could not find container \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": container with ID starting with 67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca not found: ID does not exist" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.902890 4739 scope.go:117] "RemoveContainer" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.903416 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": container with ID starting with d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae not found: ID does not exist" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903467 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} err="failed to get container status \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": rpc error: code = NotFound desc = could not find container \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": container with ID starting with d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae not found: ID does not exist" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903500 4739 scope.go:117] "RemoveContainer" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.903806 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": container with ID starting with fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79 not found: ID does not exist" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903888 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79"} err="failed to get container status \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": rpc error: code = NotFound desc = could not find container \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": container with ID starting with fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79 not found: ID does not exist" Jan 21 16:12:14 crc kubenswrapper[4739]: I0121 16:12:14.797731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" path="/var/lib/kubelet/pods/d201a396-e0b5-4319-9309-7a28ac213a4f/volumes" Jan 21 16:12:17 crc kubenswrapper[4739]: I0121 16:12:17.846582 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b774039-a2a8-4a04-9436-570c76bb8852" containerID="55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f" exitCode=0 Jan 21 16:12:17 crc kubenswrapper[4739]: I0121 
16:12:17.846686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerDied","Data":"55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f"} Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.248382 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366395 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366489 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366721 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.380123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph" (OuterVolumeSpecName: "ceph") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.380208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn" (OuterVolumeSpecName: "kube-api-access-j6gsn") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "kube-api-access-j6gsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.393236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory" (OuterVolumeSpecName: "inventory") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.394075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468308 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468339 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468349 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468358 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878293 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerDied","Data":"e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9"} Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878332 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878400 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.973375 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"] Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.973786 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-content" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.973807 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-content" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974467 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-utilities" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974487 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-utilities" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974500 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974508 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974521 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974543 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974756 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974784 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.975544 4739 util.go:30] "No sandbox for pod can be found. 
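[editor's note] Every entry above follows the same shape: a journald prefix ("Jan 21 16:12:19 crc kubenswrapper[4739]:") followed by a klog header (severity+date, wall clock, pid, source file:line) and a structured message. A minimal sketch of pulling such lines apart with Go's standard library; the struct and field names are my own, only the layout is taken from this log:

package main

import (
	"fmt"
	"regexp"
)

// entry holds the pieces of one kubenswrapper/klog line; names are illustrative.
type entry struct {
	host, severity, wallClock, source, msg string
}

// lineRE matches: journald prefix, then klog "I0121 16:12:19.973786 4739 file.go:410] msg".
var lineRE = regexp.MustCompile(
	`^(\w{3} \d+ [\d:]+) (\S+) kubenswrapper\[\d+\]: ([IEW])\d{4} ([\d:.]+)\s+\d+ ([\w.]+:\d+)\] (.*)$`)

func parse(line string) (entry, bool) {
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		return entry{}, false
	}
	return entry{host: m[2], severity: m[3], wallClock: m[4], source: m[5], msg: m[6]}, true
}

func main() {
	sample := `Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.973786 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-content"`
	if e, ok := parse(sample); ok {
		fmt.Printf("%s %s %s at %s: %s\n", e.severity, e.wallClock, e.host, e.source, e.msg)
	}
}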
Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.978483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.979973 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980382 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980478 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980652 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:19.992330 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"]
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080337 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.181806 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.185508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.185941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.199241 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.205536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.322908 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.821765 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"]
Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.886476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerStarted","Data":"ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143"}
Jan 21 16:12:22 crc kubenswrapper[4739]: I0121 16:12:22.901844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerStarted","Data":"ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9"}
Jan 21 16:12:22 crc kubenswrapper[4739]: I0121 16:12:22.919008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" podStartSLOduration=2.166453006 podStartE2EDuration="3.918987456s" podCreationTimestamp="2026-01-21 16:12:19 +0000 UTC" firstStartedPulling="2026-01-21 16:12:20.835859394 +0000 UTC m=+2772.526565658" lastFinishedPulling="2026-01-21 16:12:22.588393834 +0000 UTC m=+2774.279100108" observedRunningTime="2026-01-21 16:12:22.915285326 +0000 UTC m=+2774.605991590" watchObservedRunningTime="2026-01-21 16:12:22.918987456 +0000 UTC m=+2774.609693720"
Jan 21 16:12:35 crc kubenswrapper[4739]: I0121 16:12:35.222840 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:12:35 crc kubenswrapper[4739]: I0121 16:12:35.223277 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.222771 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223404 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223460 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223992 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
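[editor's note] The probe failures above come from an HTTP GET against http://127.0.0.1:8798/health whose connection is refused. A standard-library Go sketch of the same kind of check; the endpoint is copied from the log, but the code is illustrative and not the kubelet prober itself:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check, the way the entries above describe:
// a transport error (e.g. connection refused) or a bad status is a failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err) // matches the prober.go:107 entries above
	} else {
		fmt.Println("healthy")
	}
}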
containerStatusID={"Type":"cri-o","ID":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.224048 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9" gracePeriod=600 Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264064 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9" exitCode=0 Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"} Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264794 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:13:09 crc kubenswrapper[4739]: I0121 16:13:09.296878 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerID="ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9" exitCode=0 Jan 21 16:13:09 crc kubenswrapper[4739]: I0121 16:13:09.297100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerDied","Data":"ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9"} Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.766050 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") "
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843753 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") "
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") "
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843836 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") "
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.853515 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt" (OuterVolumeSpecName: "kube-api-access-wh4vt") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "kube-api-access-wh4vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.856308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph" (OuterVolumeSpecName: "ceph") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.878233 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory" (OuterVolumeSpecName: "inventory") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.901683 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946510 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946558 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946573 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946584 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerDied","Data":"ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143"}
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320518 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320339 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.416417 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"]
Jan 21 16:13:11 crc kubenswrapper[4739]: E0121 16:13:11.417273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.417373 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.417672 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.418549 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421567 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421964 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.422101 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.422270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.439661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"]
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.666495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.667217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.667794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.681291 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.736384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:12 crc kubenswrapper[4739]: I0121 16:13:12.251028 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"]
Jan 21 16:13:12 crc kubenswrapper[4739]: I0121 16:13:12.329527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerStarted","Data":"42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da"}
Jan 21 16:13:14 crc kubenswrapper[4739]: I0121 16:13:14.348656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerStarted","Data":"18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c"}
Jan 21 16:13:14 crc kubenswrapper[4739]: I0121 16:13:14.368581 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" podStartSLOduration=2.877446021 podStartE2EDuration="3.368559865s" podCreationTimestamp="2026-01-21 16:13:11 +0000 UTC" firstStartedPulling="2026-01-21 16:13:12.253137672 +0000 UTC m=+2823.943843936" lastFinishedPulling="2026-01-21 16:13:12.744251516 +0000 UTC m=+2824.434957780" observedRunningTime="2026-01-21 16:13:14.363772225 +0000 UTC m=+2826.054478509" watchObservedRunningTime="2026-01-21 16:13:14.368559865 +0000 UTC m=+2826.059266129"
Jan 21 16:13:23 crc kubenswrapper[4739]: I0121 16:13:23.419710 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerID="18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c" exitCode=0
Jan 21 16:13:23 crc kubenswrapper[4739]: I0121 16:13:23.419775 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerDied","Data":"18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c"}
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.800177 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
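[editor's note] The pod_startup_latency_tracker.go:104 entries appear to satisfy a simple relationship: podStartSLOduration is the end-to-end duration minus the time spent pulling images, with the pull interval taken from the monotonic (m=) offsets. A short Go check against the ssh-known-hosts-edpm-deployment-xkcn4 numbers above (variable names are mine; the formula is inferred from the log values, not quoted from kubelet source):

package main

import "fmt"

func main() {
	// Monotonic offsets (m=) from the ssh-known-hosts-edpm-deployment-xkcn4 entry.
	firstStartedPulling := 2823.943843936
	lastFinishedPulling := 2824.434957780
	podStartE2E := 3.368559865 // watchObservedRunningTime - podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	fmt.Printf("image pull: %.9fs\n", pull)          // 0.491113844s
	fmt.Printf("podStartSLOduration: %.9fs\n", slo)  // 2.877446021s, matching the log
}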
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") "
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916396 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") "
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") "
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916596 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") "
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.922516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph" (OuterVolumeSpecName: "ceph") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.922534 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh" (OuterVolumeSpecName: "kube-api-access-mfnfh") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "kube-api-access-mfnfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.942426 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.944864 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.018998 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019254 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019352 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019432 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440173 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerDied","Data":"42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da"}
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440213 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440251 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.518674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"]
Jan 21 16:13:25 crc kubenswrapper[4739]: E0121 16:13:25.519268 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.519284 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.519468 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.520142 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526507 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526764 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"]
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631491 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732733 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.739615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.740056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.744861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.759881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.839568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:26 crc kubenswrapper[4739]: I0121 16:13:26.177869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"]
Jan 21 16:13:26 crc kubenswrapper[4739]: I0121 16:13:26.448848 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerStarted","Data":"983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c"}
Jan 21 16:13:27 crc kubenswrapper[4739]: I0121 16:13:27.459227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerStarted","Data":"0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9"}
Jan 21 16:13:27 crc kubenswrapper[4739]: I0121 16:13:27.484259 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" podStartSLOduration=1.724581599 podStartE2EDuration="2.484238324s" podCreationTimestamp="2026-01-21 16:13:25 +0000 UTC" firstStartedPulling="2026-01-21 16:13:26.182086806 +0000 UTC m=+2837.872793070" lastFinishedPulling="2026-01-21 16:13:26.941743531 +0000 UTC m=+2838.632449795" observedRunningTime="2026-01-21 16:13:27.474160677 +0000 UTC m=+2839.164866941" watchObservedRunningTime="2026-01-21 16:13:27.484238324 +0000 UTC m=+2839.174944588"
Jan 21 16:13:36 crc kubenswrapper[4739]: I0121 16:13:36.531497 4739 generic.go:334] "Generic (PLEG): container finished" podID="056d99bf-bfdf-40d6-b888-0390a1674524" containerID="0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9" exitCode=0
Jan 21 16:13:36 crc kubenswrapper[4739]: I0121 16:13:36.531526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerDied","Data":"0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9"}
Jan 21 16:13:37 crc kubenswrapper[4739]: I0121 16:13:37.962739 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") "
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049684 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") "
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") "
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") "
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.059074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb" (OuterVolumeSpecName: "kube-api-access-ldhwb") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "kube-api-access-ldhwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.063407 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph" (OuterVolumeSpecName: "ceph") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.076432 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory" (OuterVolumeSpecName: "inventory") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.081073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152320 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152556 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152650 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152720 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerDied","Data":"983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c"}
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547909 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547613 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.655448 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"]
Jan 21 16:13:38 crc kubenswrapper[4739]: E0121 16:13:38.677006 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.677085 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.677777 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.678631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"]
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.678727 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683451 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683845 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683926 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768358 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768559 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.879760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.879980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.880312 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.886718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.005507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.505593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"]
Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.554851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerStarted","Data":"e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490"}
Jan 21 16:13:40 crc kubenswrapper[4739]: I0121 16:13:40.573307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerStarted","Data":"5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2"}
Jan 21 16:13:40 crc kubenswrapper[4739]: I0121 16:13:40.597280 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" podStartSLOduration=2.046022827 podStartE2EDuration="2.597259178s" podCreationTimestamp="2026-01-21 16:13:38 +0000 UTC" firstStartedPulling="2026-01-21 16:13:39.505729465 +0000 UTC m=+2851.196435729" lastFinishedPulling="2026-01-21 16:13:40.056965816 +0000 UTC m=+2851.747672080" observedRunningTime="2026-01-21 16:13:40.587030148 +0000 UTC m=+2852.277736422" watchObservedRunningTime="2026-01-21 16:13:40.597259178 +0000 UTC m=+2852.287965442"
Jan 21 16:13:50 crc kubenswrapper[4739]: I0121 16:13:50.670251 4739 generic.go:334] "Generic (PLEG): container finished" podID="1942d825-3f2c-4555-9212-4771283ad4cb" containerID="5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2" exitCode=0
Jan 21 16:13:50 crc kubenswrapper[4739]: I0121 16:13:50.670309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerDied","Data":"5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2"}
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.093694 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220064 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") "
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") "
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220171 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") "
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220243 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") "
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.225118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph" (OuterVolumeSpecName: "ceph") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.225818 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj" (OuterVolumeSpecName: "kube-api-access-qr7pj") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "kube-api-access-qr7pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.249992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.255855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory" (OuterVolumeSpecName: "inventory") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322534 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322568 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322580 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322588 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerDied","Data":"e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490"} Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688658 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688720 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780318 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:52 crc kubenswrapper[4739]: E0121 16:13:52.780668 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780683 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780910 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.781568 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786677 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.787013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.788025 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.788774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.789218 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.789836 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.803206 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932780 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932816 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" 
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: 
I0121 16:13:53.039653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.040273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.040288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.041842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.044444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.054063 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.062039 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.102508 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.662322 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.704311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerStarted","Data":"14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d"} Jan 21 16:13:54 crc kubenswrapper[4739]: I0121 16:13:54.714705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerStarted","Data":"cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08"} Jan 21 16:13:54 crc kubenswrapper[4739]: I0121 16:13:54.738420 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" podStartSLOduration=2.29017967 podStartE2EDuration="2.738398978s" podCreationTimestamp="2026-01-21 16:13:52 +0000 UTC" firstStartedPulling="2026-01-21 16:13:53.674235874 +0000 UTC m=+2865.364942138" lastFinishedPulling="2026-01-21 16:13:54.122455182 +0000 UTC m=+2865.813161446" observedRunningTime="2026-01-21 16:13:54.737213905 +0000 UTC m=+2866.427920169" watchObservedRunningTime="2026-01-21 16:13:54.738398978 +0000 UTC m=+2866.429105242" Jan 21 16:14:26 crc kubenswrapper[4739]: I0121 16:14:26.976650 4739 generic.go:334] "Generic (PLEG): container finished" podID="e57ad057-1847-4336-a884-ca693f4ee867" containerID="cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08" exitCode=0 Jan 21 16:14:26 crc kubenswrapper[4739]: I0121 16:14:26.976734 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerDied","Data":"cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08"} Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.505842 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573481 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573567 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573680 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573727 
4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573749 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573794 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.578605 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph" (OuterVolumeSpecName: "ceph") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.578699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.579777 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.581222 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.581674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.582776 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.584333 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.584715 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.585256 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.585703 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.587975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll" (OuterVolumeSpecName: "kube-api-access-qlqll") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "kube-api-access-qlqll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.601135 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory" (OuterVolumeSpecName: "inventory") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.605199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675623 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675662 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675673 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675683 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675691 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675700 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675709 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675718 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675728 4739 reconciler_common.go:293] "Volume detached for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675736 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675744 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675752 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675759 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerDied","Data":"14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d"} Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992902 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992997 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090208 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"] Jan 21 16:14:29 crc kubenswrapper[4739]: E0121 16:14:29.090684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090985 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.091625 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.093939 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.095323 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.095515 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.096179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.099150 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.101695 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"] Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.182974 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183052 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284498 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.288597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.288615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.295351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.305174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.406391 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.911740 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.914776 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.930027 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"]
Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.943474 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"]
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.002665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerStarted","Data":"ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2"}
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103249 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.104102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.104310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.137774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.277775 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:30 crc kubenswrapper[4739]: W0121 16:14:30.878561 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59564c4_7106_4906_9cf7_ecddcc83fa7a.slice/crio-f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa WatchSource:0}: Error finding container f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa: Status 404 returned error can't find the container with id f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa
Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.885156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"]
Jan 21 16:14:31 crc kubenswrapper[4739]: I0121 16:14:31.010953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerStarted","Data":"f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa"}
Jan 21 16:14:31 crc kubenswrapper[4739]: I0121 16:14:31.012299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerStarted","Data":"f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e"}
Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.021170 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66" exitCode=0
Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.021260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"}
Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.061874 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" podStartSLOduration=2.412838544 podStartE2EDuration="3.061848835s" podCreationTimestamp="2026-01-21 16:14:29 +0000 UTC" firstStartedPulling="2026-01-21 16:14:29.931926185 +0000 UTC m=+2901.622632449" lastFinishedPulling="2026-01-21 16:14:30.580936476 +0000 UTC m=+2902.271642740" observedRunningTime="2026-01-21 16:14:32.056384935 +0000 UTC m=+2903.747091209" watchObservedRunningTime="2026-01-21 16:14:32.061848835 +0000 UTC m=+2903.752555099"
Jan 21 16:14:34 crc kubenswrapper[4739]: I0121 16:14:34.041627 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed" exitCode=0
Jan 21 16:14:34 crc kubenswrapper[4739]: I0121 16:14:34.042141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"}
Jan 21 16:14:36 crc kubenswrapper[4739]: I0121 16:14:36.061073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerStarted","Data":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"}
Jan 21 16:14:36 crc kubenswrapper[4739]: I0121 16:14:36.092800 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8sdmf" podStartSLOduration=4.331894893 podStartE2EDuration="7.0927732s" podCreationTimestamp="2026-01-21 16:14:29 +0000 UTC" firstStartedPulling="2026-01-21 16:14:32.022847236 +0000 UTC m=+2903.713553500" lastFinishedPulling="2026-01-21 16:14:34.783725533 +0000 UTC m=+2906.474431807" observedRunningTime="2026-01-21 16:14:36.082404405 +0000 UTC m=+2907.773110709" watchObservedRunningTime="2026-01-21 16:14:36.0927732 +0000 UTC m=+2907.783479484"
Jan 21 16:14:37 crc kubenswrapper[4739]: I0121 16:14:37.069798 4739 generic.go:334] "Generic (PLEG): container finished" podID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerID="f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e" exitCode=0
Jan 21 16:14:37 crc kubenswrapper[4739]: I0121 16:14:37.069877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerDied","Data":"f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e"}
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.447343 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") "
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458335 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") "
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") "
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458537 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") "
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.466043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph" (OuterVolumeSpecName: "ceph") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.466479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd" (OuterVolumeSpecName: "kube-api-access-ssztd") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "kube-api-access-ssztd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.491957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory" (OuterVolumeSpecName: "inventory") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.496137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561066 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561106 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561121 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561133 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087002 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerDied","Data":"ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2"}
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087383 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087055 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.164731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"]
Jan 21 16:14:39 crc kubenswrapper[4739]: E0121 16:14:39.165198 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.165219 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.165424 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.166045 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168280 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168297 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168865 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.169028 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173202 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173438 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173864 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.183967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"]
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.274754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275327 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.276235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.279404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.279800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.280231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.280659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.294966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.523311 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"
Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.008538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"]
Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.098129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerStarted","Data":"2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105"}
Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.278645 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.279271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.327616 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:41 crc kubenswrapper[4739]: I0121 16:14:41.167231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:41 crc kubenswrapper[4739]: I0121 16:14:41.228740 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"]
Jan 21 16:14:42 crc kubenswrapper[4739]: I0121 16:14:42.122127 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerStarted","Data":"5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8"}
Jan 21 16:14:42 crc kubenswrapper[4739]: I0121 16:14:42.158305 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" podStartSLOduration=2.114156398 podStartE2EDuration="3.158289432s" podCreationTimestamp="2026-01-21 16:14:39 +0000 UTC" firstStartedPulling="2026-01-21 16:14:40.018256704 +0000 UTC m=+2911.708962968" lastFinishedPulling="2026-01-21 16:14:41.062389738 +0000 UTC m=+2912.753096002" observedRunningTime="2026-01-21 16:14:42.156983096 +0000 UTC m=+2913.847689360" watchObservedRunningTime="2026-01-21 16:14:42.158289432 +0000 UTC m=+2913.848995696"
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.128498 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8sdmf" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server" containerID="cri-o://ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" gracePeriod=2
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.750740 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.865320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") "
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.865758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") "
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.866083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") "
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.869023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities" (OuterVolumeSpecName: "utilities") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.872512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m" (OuterVolumeSpecName: "kube-api-access-vsp5m") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "kube-api-access-vsp5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.969434 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.969467 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.996433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.071311 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139342 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" exitCode=0
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"}
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa"}
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139453 4739 scope.go:117] "RemoveContainer" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139607 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.165045 4739 scope.go:117] "RemoveContainer" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.172722 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"]
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.181463 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"]
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.189299 4739 scope.go:117] "RemoveContainer" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.227100 4739 scope.go:117] "RemoveContainer" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"
Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.227751 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": container with ID starting with ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125 not found: ID does not exist" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.227933 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"} err="failed to get container status \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": rpc error: code = NotFound desc = could not find container \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": container with ID starting with ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125 not found: ID does not exist"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228044 4739 scope.go:117] "RemoveContainer" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"
Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.228440 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": container with ID starting with 810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed not found: ID does not exist" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228479 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"} err="failed to get container status \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": rpc error: code = NotFound desc = could not find container \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": container with ID starting with 810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed not found: ID does not exist"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228513 4739 scope.go:117] "RemoveContainer" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"
Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.229130 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": container with ID starting with a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66 not found: ID does not exist" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.229162 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"} err="failed to get container status \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": rpc error: code = NotFound desc = could not find container \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": container with ID starting with a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66 not found: ID does not exist"
Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.793796 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" path="/var/lib/kubelet/pods/c59564c4-7106-4906-9cf7-ecddcc83fa7a/volumes"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.151139 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"]
Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152061 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-utilities"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152076 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-utilities"
Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152097 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152108 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152115 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152316 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.155913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.156642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.166834 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"]
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.275957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.276027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.276048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.279689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.293475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.293605 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.495606 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.962560 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"]
Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.280064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerStarted","Data":"9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d"}
Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.280384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerStarted","Data":"079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369"}
Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.296510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" podStartSLOduration=1.296496289 podStartE2EDuration="1.296496289s" podCreationTimestamp="2026-01-21 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:15:01.292625264 +0000 UTC m=+2932.983331518" watchObservedRunningTime="2026-01-21 16:15:01.296496289 +0000 UTC m=+2932.987202553"
Jan 21 16:15:02 crc kubenswrapper[4739]: I0121 16:15:02.288229 4739 generic.go:334] "Generic (PLEG): container finished" podID="500844a7-398c-49ff-ab43-ee0502f1c576" containerID="9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d" exitCode=0
Jan 21 16:15:02 crc kubenswrapper[4739]: I0121 16:15:02.288272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerDied","Data":"9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d"}
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.607556 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.640559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") "
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") "
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") "
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume" (OuterVolumeSpecName: "config-volume") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.648934 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.653994 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n" (OuterVolumeSpecName: "kube-api-access-d4p9n") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "kube-api-access-d4p9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743064 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743097 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743109 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304001 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerDied","Data":"079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369"}
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304038 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369"
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304518 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.376523 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"]
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.384373 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"]
Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.795504 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" path="/var/lib/kubelet/pods/3f378ddb-72bf-4542-bec3-ce2652d0ab02/volumes"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.717403 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:05 crc kubenswrapper[4739]: E0121 16:15:05.718514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.718535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.718960 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.720389 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.730057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781370 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.886437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.886477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.909539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:06 crc kubenswrapper[4739]: I0121 16:15:06.039640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:06 crc kubenswrapper[4739]: I0121 16:15:06.652526 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329542 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" exitCode=0
Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386"}
Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"8ee62d880d328031b0b91358c614757292ce91ff9fdf5ceadb716c0b499b9e0a"}
Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.332137 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:15:08 crc kubenswrapper[4739]: I0121 16:15:08.340950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"}
Jan 21 16:15:10 crc kubenswrapper[4739]: I0121 16:15:10.357875 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b" exitCode=0
Jan 21 16:15:10 crc kubenswrapper[4739]: I0121 16:15:10.357938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"}
Jan 21 16:15:11 crc kubenswrapper[4739]: I0121 16:15:11.371045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"}
Jan 21 16:15:11 crc kubenswrapper[4739]: I0121 16:15:11.394953 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtsh5" podStartSLOduration=2.851200547 podStartE2EDuration="6.394934307s" podCreationTimestamp="2026-01-21 16:15:05 +0000 UTC" firstStartedPulling="2026-01-21 16:15:07.331942743 +0000 UTC m=+2939.022649007" lastFinishedPulling="2026-01-21 16:15:10.875676513 +0000 UTC m=+2942.566382767" observedRunningTime="2026-01-21 16:15:11.392183672 +0000 UTC m=+2943.082889956" watchObservedRunningTime="2026-01-21 16:15:11.394934307 +0000 UTC m=+2943.085640571"
Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.040485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.040826 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.095999 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.450904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.499042 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:18 crc kubenswrapper[4739]: I0121 16:15:18.419062 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtsh5" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" containerID="cri-o://ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" gracePeriod=2
Jan 21 16:15:18 crc kubenswrapper[4739]: I0121 16:15:18.874123 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046440 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") "
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") "
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") "
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.047877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities" (OuterVolumeSpecName: "utilities") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.054181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq" (OuterVolumeSpecName: "kube-api-access-2mvjq") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "kube-api-access-2mvjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.092305 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.149632 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.149997 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.150017 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432851 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" exitCode=0
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"}
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432920 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"8ee62d880d328031b0b91358c614757292ce91ff9fdf5ceadb716c0b499b9e0a"}
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432919 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432997 4739 scope.go:117] "RemoveContainer" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.468089 4739 scope.go:117] "RemoveContainer" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.470849 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.505979 4739 scope.go:117] "RemoveContainer" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.509711 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"]
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.540391 4739 scope.go:117] "RemoveContainer" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"
Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.546346 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": container with ID starting with ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f not found: ID does not exist" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.546561 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"} err="failed to get container status \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": rpc error: code = NotFound desc = could not find container \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": container with ID starting with ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f not found: ID does not exist"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.546664 4739 scope.go:117] "RemoveContainer" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"
Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.548191 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": container with ID starting with b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b not found: ID does not exist" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548284 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"} err="failed to get container status \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": rpc error: code = NotFound desc = could not find container \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": container with ID starting with b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b not found: ID does not exist"
Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548385 4739 scope.go:117] "RemoveContainer"
containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.548679 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": container with ID starting with 6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386 not found: ID does not exist" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548774 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386"} err="failed to get container status \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": rpc error: code = NotFound desc = could not find container \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": container with ID starting with 6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386 not found: ID does not exist" Jan 21 16:15:20 crc kubenswrapper[4739]: I0121 16:15:20.803469 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" path="/var/lib/kubelet/pods/1773672f-0a93-4ffa-92ff-e7d851953c13/volumes" Jan 21 16:15:35 crc kubenswrapper[4739]: I0121 16:15:35.222897 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:15:35 crc kubenswrapper[4739]: I0121 16:15:35.223448 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:15:50 crc kubenswrapper[4739]: I0121 16:15:50.119641 4739 scope.go:117] "RemoveContainer" containerID="d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d" Jan 21 16:16:05 crc kubenswrapper[4739]: I0121 16:16:05.222484 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:16:05 crc kubenswrapper[4739]: I0121 16:16:05.223035 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:16:06 crc kubenswrapper[4739]: I0121 16:16:06.798014 4739 generic.go:334] "Generic (PLEG): container finished" podID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerID="5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8" exitCode=0 Jan 21 16:16:06 crc kubenswrapper[4739]: I0121 16:16:06.798119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" 
event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerDied","Data":"5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8"} Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.244771 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420443 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420727 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420928 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.421024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.430968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2" (OuterVolumeSpecName: "kube-api-access-qg5q2") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "kube-api-access-qg5q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.455043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.457666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph" (OuterVolumeSpecName: "ceph") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.485385 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.506355 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.517972 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory" (OuterVolumeSpecName: "inventory") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522558 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522823 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522912 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522991 4739 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.523073 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.523168 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818002 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerDied","Data":"2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105"} Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818040 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818098 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.915264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"] Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921041 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921075 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921085 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-utilities" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921091 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-utilities" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921127 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921133 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921143 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-content" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921149 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-content" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921371 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921383 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.922007 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928295 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928534 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"] Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928667 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928994 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929117 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032607 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032652 4739 
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032716 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.139273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.139805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.140040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.140731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.142245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.144324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.153581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.241893 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.780188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"]
Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.826671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerStarted","Data":"de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0"}
Jan 21 16:16:11 crc kubenswrapper[4739]: I0121 16:16:11.847880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerStarted","Data":"2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b"}
Jan 21 16:16:11 crc kubenswrapper[4739]: I0121 16:16:11.867633 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" podStartSLOduration=2.95188755 podStartE2EDuration="3.867608686s" podCreationTimestamp="2026-01-21 16:16:08 +0000 UTC" firstStartedPulling="2026-01-21 16:16:09.779230431 +0000 UTC m=+3001.469936695" lastFinishedPulling="2026-01-21 16:16:10.694951567 +0000 UTC m=+3002.385657831" observedRunningTime="2026-01-21 16:16:11.86444561 +0000 UTC m=+3003.555151884" watchObservedRunningTime="2026-01-21 16:16:11.867608686 +0000 UTC m=+3003.558314950"
Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.222766 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223251 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223293 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223962 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.224005 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" gracePeriod=600
Jan 21 16:16:35 crc kubenswrapper[4739]: E0121 16:16:35.341245 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"}
Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037881 4739 scope.go:117] "RemoveContainer" containerID="9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"
Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037774 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" exitCode=0
Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.038498 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:16:36 crc kubenswrapper[4739]: E0121 16:16:36.038731 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:16:47 crc kubenswrapper[4739]: I0121 16:16:47.783168 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:16:47 crc kubenswrapper[4739]: E0121 16:16:47.784192 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:16:58 crc kubenswrapper[4739]: I0121 16:16:58.790289 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:16:58 crc kubenswrapper[4739]: E0121 16:16:58.791016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:17:13 crc kubenswrapper[4739]: I0121 16:17:13.783196 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:17:13 crc kubenswrapper[4739]: E0121 16:17:13.784076 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:17:26 crc kubenswrapper[4739]: I0121 16:17:26.432233 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a2c5efb-5467-4985-8526-56adf203eef0" containerID="2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b" exitCode=0
Jan 21 16:17:26 crc kubenswrapper[4739]: I0121 16:17:26.432308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerDied","Data":"2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b"}
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.782966 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:17:27 crc kubenswrapper[4739]: E0121 16:17:27.783883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.822766 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952555 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952768 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952934 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952971 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") "
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.967807 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph" (OuterVolumeSpecName: "ceph") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.969123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk" (OuterVolumeSpecName: "kube-api-access-gfwnk") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "kube-api-access-gfwnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.969912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.978631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.979177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.979505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory" (OuterVolumeSpecName: "inventory") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.993589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054702 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054737 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054752 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054764 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054775 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054785 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054800 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerDied","Data":"de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0"}
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449236 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449311 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.567809 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"]
Jan 21 16:17:28 crc kubenswrapper[4739]: E0121 16:17:28.568400 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.568417 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.568587 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.569126 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571737 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571754 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571747 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571860 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.572250 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.572464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.585745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"]
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665864 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
\"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.666122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.666311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.767788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768520 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.773421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.774561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.780626 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.781135 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.781569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.796058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.887539 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:29 crc kubenswrapper[4739]: I0121 16:17:29.491155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"] Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.465913 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerStarted","Data":"d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45"} Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.466506 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerStarted","Data":"6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8"} Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.500095 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" podStartSLOduration=2.109670406 podStartE2EDuration="2.500077404s" podCreationTimestamp="2026-01-21 16:17:28 +0000 UTC" firstStartedPulling="2026-01-21 16:17:29.498344685 +0000 UTC m=+3081.189050949" lastFinishedPulling="2026-01-21 16:17:29.888751693 +0000 UTC m=+3081.579457947" observedRunningTime="2026-01-21 16:17:30.484357436 +0000 UTC m=+3082.175063700" watchObservedRunningTime="2026-01-21 16:17:30.500077404 +0000 UTC m=+3082.190783668" Jan 21 16:17:41 crc kubenswrapper[4739]: I0121 16:17:41.783158 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:41 crc kubenswrapper[4739]: E0121 16:17:41.784157 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:17:55 crc kubenswrapper[4739]: I0121 16:17:55.783123 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:55 crc kubenswrapper[4739]: E0121 16:17:55.783920 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:07 crc kubenswrapper[4739]: I0121 16:18:07.782734 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:07 crc kubenswrapper[4739]: E0121 16:18:07.783431 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:21 crc kubenswrapper[4739]: I0121 16:18:21.783056 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:21 crc kubenswrapper[4739]: E0121 16:18:21.783747 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:33 crc kubenswrapper[4739]: I0121 16:18:33.783126 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:33 crc kubenswrapper[4739]: E0121 16:18:33.783950 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:47 crc kubenswrapper[4739]: I0121 16:18:47.782925 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:47 crc kubenswrapper[4739]: E0121 16:18:47.783711 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:02 crc kubenswrapper[4739]: I0121 16:19:02.782708 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:02 crc kubenswrapper[4739]: E0121 16:19:02.783462 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:16 crc kubenswrapper[4739]: I0121 16:19:16.783123 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:16 crc kubenswrapper[4739]: E0121 16:19:16.783894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:30 crc kubenswrapper[4739]: I0121 16:19:30.783734 4739 
scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:30 crc kubenswrapper[4739]: E0121 16:19:30.784572 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:45 crc kubenswrapper[4739]: I0121 16:19:45.783000 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:45 crc kubenswrapper[4739]: E0121 16:19:45.783748 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:00 crc kubenswrapper[4739]: I0121 16:20:00.783540 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:00 crc kubenswrapper[4739]: E0121 16:20:00.784527 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.026881 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.030777 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.044112 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142327 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.267715 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.351466 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.842280 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771696 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" exitCode=0 Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e"} Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"272c669c635dd378a2ba39c41f39a0dcdf5fe19eded5b4c00569ef5ed37aa652"} Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.774615 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:20:14 crc kubenswrapper[4739]: I0121 16:20:14.807593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} Jan 21 16:20:15 crc kubenswrapper[4739]: I0121 16:20:15.782435 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:15 crc kubenswrapper[4739]: E0121 16:20:15.782934 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:17 crc kubenswrapper[4739]: I0121 16:20:17.826436 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" exitCode=0 Jan 21 16:20:17 crc kubenswrapper[4739]: I0121 16:20:17.826534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} Jan 21 16:20:19 crc kubenswrapper[4739]: I0121 16:20:19.846220 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" 
event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} Jan 21 16:20:21 crc kubenswrapper[4739]: I0121 16:20:21.352136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:21 crc kubenswrapper[4739]: I0121 16:20:21.352485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:22 crc kubenswrapper[4739]: I0121 16:20:22.396423 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tb9w4" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" probeResult="failure" output=< Jan 21 16:20:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:20:22 crc kubenswrapper[4739]: > Jan 21 16:20:28 crc kubenswrapper[4739]: I0121 16:20:28.790289 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:28 crc kubenswrapper[4739]: E0121 16:20:28.791048 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.402907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.423909 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tb9w4" podStartSLOduration=13.874299702 podStartE2EDuration="20.423889261s" podCreationTimestamp="2026-01-21 16:20:11 +0000 UTC" firstStartedPulling="2026-01-21 16:20:12.774318864 +0000 UTC m=+3244.465025128" lastFinishedPulling="2026-01-21 16:20:19.323908433 +0000 UTC m=+3251.014614687" observedRunningTime="2026-01-21 16:20:19.864757204 +0000 UTC m=+3251.555463478" watchObservedRunningTime="2026-01-21 16:20:31.423889261 +0000 UTC m=+3263.114595515" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.466198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.641562 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:32 crc kubenswrapper[4739]: I0121 16:20:32.941802 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tb9w4" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" containerID="cri-o://ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" gracePeriod=2 Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.411583 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603453 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.604982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities" (OuterVolumeSpecName: "utilities") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.610660 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw" (OuterVolumeSpecName: "kube-api-access-n85lw") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "kube-api-access-n85lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.705709 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.705746 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.729018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.807690 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951021 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" exitCode=0 Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951066 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951097 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"272c669c635dd378a2ba39c41f39a0dcdf5fe19eded5b4c00569ef5ed37aa652"} Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951116 4739 scope.go:117] "RemoveContainer" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951123 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.972743 4739 scope.go:117] "RemoveContainer" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.991885 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.005608 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.007521 4739 scope.go:117] "RemoveContainer" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.039290 4739 scope.go:117] "RemoveContainer" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.039646 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": container with ID starting with ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1 not found: ID does not exist" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.039692 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} err="failed to get container status \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": rpc error: code = NotFound desc = could not find container \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": container with ID starting with ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1 not found: ID does not exist" Jan 21 16:20:34 crc 
kubenswrapper[4739]: I0121 16:20:34.039716 4739 scope.go:117] "RemoveContainer" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.040108 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": container with ID starting with 04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6 not found: ID does not exist" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040172 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} err="failed to get container status \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": rpc error: code = NotFound desc = could not find container \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": container with ID starting with 04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6 not found: ID does not exist" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040192 4739 scope.go:117] "RemoveContainer" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.040454 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": container with ID starting with 690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e not found: ID does not exist" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040477 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e"} err="failed to get container status \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": rpc error: code = NotFound desc = could not find container \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": container with ID starting with 690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e not found: ID does not exist" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.793153 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515f8b16-a411-4263-8099-e6cba1af79be" path="/var/lib/kubelet/pods/515f8b16-a411-4263-8099-e6cba1af79be/volumes" Jan 21 16:20:40 crc kubenswrapper[4739]: I0121 16:20:40.782958 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:40 crc kubenswrapper[4739]: E0121 16:20:40.783634 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:51 crc kubenswrapper[4739]: I0121 16:20:51.782793 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" 
Jan 21 16:20:51 crc kubenswrapper[4739]: E0121 16:20:51.783420 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:21:04 crc kubenswrapper[4739]: I0121 16:21:04.783701 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:21:04 crc kubenswrapper[4739]: E0121 16:21:04.785100 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:21:16 crc kubenswrapper[4739]: I0121 16:21:16.783116 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:21:16 crc kubenswrapper[4739]: E0121 16:21:16.783937 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:21:31 crc kubenswrapper[4739]: I0121 16:21:31.783508 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:21:31 crc kubenswrapper[4739]: E0121 16:21:31.784985 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:21:44 crc kubenswrapper[4739]: I0121 16:21:44.782995 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"
Jan 21 16:21:45 crc kubenswrapper[4739]: I0121 16:21:45.556986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"}
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.042894 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.043982 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-utilities"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044000 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-utilities"
Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.044018 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044025 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server"
Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.044050 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-content"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044059 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-content"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044265 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.046912 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.059765 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187348 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289783 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.290022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.308865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.377499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.946263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:00 crc kubenswrapper[4739]: W0121 16:22:00.951099 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fc3161_9e69_4168_8da0_1eb3267a21b0.slice/crio-ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0 WatchSource:0}: Error finding container ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0: Status 404 returned error can't find the container with id ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0
Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.690753 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b" exitCode=0
Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.690966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b"}
Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.691139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0"}
Jan 21 16:22:02 crc kubenswrapper[4739]: I0121 16:22:02.700607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05"}
Jan 21 16:22:03 crc kubenswrapper[4739]: I0121 16:22:03.710418 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05" exitCode=0
Jan 21 16:22:03 crc kubenswrapper[4739]: I0121 16:22:03.710485 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05"}
Jan 21 16:22:04 crc kubenswrapper[4739]: I0121 16:22:04.720897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46"}
Jan 21 16:22:04 crc kubenswrapper[4739]: I0121 16:22:04.740073 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4njk" podStartSLOduration=2.216464531 podStartE2EDuration="4.740054131s" podCreationTimestamp="2026-01-21 16:22:00 +0000 UTC" firstStartedPulling="2026-01-21 16:22:01.692559644 +0000 UTC m=+3353.383265908" lastFinishedPulling="2026-01-21 16:22:04.216149244 +0000 UTC m=+3355.906855508" observedRunningTime="2026-01-21 16:22:04.739188297 +0000 UTC m=+3356.429894561" watchObservedRunningTime="2026-01-21 16:22:04.740054131 +0000 UTC m=+3356.430760405"
Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.378287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.379031 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.429561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.814526 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.865062 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:12 crc kubenswrapper[4739]: I0121 16:22:12.785134 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4njk" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server" containerID="cri-o://e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46" gracePeriod=2
Jan 21 16:22:14 crc kubenswrapper[4739]: I0121 16:22:14.844884 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46" exitCode=0
Jan 21 16:22:14 crc kubenswrapper[4739]: I0121 16:22:14.844950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46"}
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.151475 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270159 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") "
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") "
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270674 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") "
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.271368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities" (OuterVolumeSpecName: "utilities") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.279079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s" (OuterVolumeSpecName: "kube-api-access-9mr9s") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "kube-api-access-9mr9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.295901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372874 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372887 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.856944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0"}
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.857017 4739 scope.go:117] "RemoveContainer" containerID="e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46"
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.857108 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk"
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.880139 4739 scope.go:117] "RemoveContainer" containerID="95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05"
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.899978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.908630 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"]
Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.912677 4739 scope.go:117] "RemoveContainer" containerID="067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b"
Jan 21 16:22:16 crc kubenswrapper[4739]: I0121 16:22:16.791682 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" path="/var/lib/kubelet/pods/19fc3161-9e69-4168-8da0-1eb3267a21b0/volumes"
Jan 21 16:22:25 crc kubenswrapper[4739]: I0121 16:22:25.969351 4739 generic.go:334] "Generic (PLEG): container finished" podID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerID="d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45" exitCode=0
Jan 21 16:22:25 crc kubenswrapper[4739]: I0121 16:22:25.969430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerDied","Data":"d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45"}
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.354432 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489035 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") "
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.494450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l" (OuterVolumeSpecName: "kube-api-access-tmd5l") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "kube-api-access-tmd5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.501948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph" (OuterVolumeSpecName: "ceph") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.508203 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.527356 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory" (OuterVolumeSpecName: "inventory") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.530129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.543986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590588 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590623 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590635 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590649 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590662 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590674 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") on node \"crc\" DevicePath \"\""
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerDied","Data":"6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8"}
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986569 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8"
Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986612 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.169976 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"]
Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.170664 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.170754 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.170860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.170950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server"
Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.171036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-content"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-content"
Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.171200 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-utilities"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171303 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-utilities"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171738 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.172536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.175195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.175471 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.176264 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.176526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.178195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.178383 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179246 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179251 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179302 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.186190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"]
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.307977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308184 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308522 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308579 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"
Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") "
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410185 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410557 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.411333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.413971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417160 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.418588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.421193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.424105 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.430650 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.437804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.496237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:29 crc kubenswrapper[4739]: I0121 16:22:29.017238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"] Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.003339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerStarted","Data":"ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372"} Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.003622 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerStarted","Data":"4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c"} Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.021036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" podStartSLOduration=1.615798083 podStartE2EDuration="2.021004877s" podCreationTimestamp="2026-01-21 16:22:28 +0000 UTC" firstStartedPulling="2026-01-21 16:22:29.02687987 +0000 UTC m=+3380.717586124" lastFinishedPulling="2026-01-21 16:22:29.432086624 +0000 UTC m=+3381.122792918" observedRunningTime="2026-01-21 16:22:30.021002096 +0000 UTC m=+3381.711708350" watchObservedRunningTime="2026-01-21 16:22:30.021004877 +0000 UTC m=+3381.711711141" Jan 21 16:24:05 crc kubenswrapper[4739]: I0121 16:24:05.223019 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:24:05 crc kubenswrapper[4739]: I0121 16:24:05.223530 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:24:35 crc kubenswrapper[4739]: I0121 16:24:35.223273 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:24:35 crc kubenswrapper[4739]: I0121 16:24:35.223951 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.222531 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223068 4739 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223119 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223865 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223907 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62" gracePeriod=600 Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.337091 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.339408 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.372017 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404284 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62" exitCode=0 Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"} Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404383 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.407283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.407530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: 
I0121 16:25:05.407656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.510238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.510393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.532506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.783094 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.351706 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.417118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"} Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.424118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"33e620cb82954691dc3413e916410fd12ca12f740779eb3b47c264c9314eb69a"} Jan 21 16:25:07 crc kubenswrapper[4739]: I0121 16:25:07.433319 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" exitCode=0 Jan 21 16:25:07 crc kubenswrapper[4739]: I0121 16:25:07.433429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"} Jan 21 16:25:08 crc kubenswrapper[4739]: I0121 16:25:08.445528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.128143 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.130237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.139698 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.318843 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.455858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.457727 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" exitCode=0 Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.457983 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.040717 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:10 crc kubenswrapper[4739]: W0121 16:25:10.043966 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9087973_ce8f_4145_95a3_3cc84cfd4d70.slice/crio-9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7 WatchSource:0}: Error finding container 9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7: Status 404 returned error can't find the container with id 9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7 Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.470258 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d" exitCode=0 Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.470469 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"} Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.471102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7"} Jan 21 16:25:11 crc kubenswrapper[4739]: I0121 16:25:11.485262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} Jan 21 16:25:11 crc kubenswrapper[4739]: I0121 16:25:11.489273 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.499475 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" 
containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9" exitCode=0 Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.499643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.526968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zk8jl" podStartSLOduration=4.598887884 podStartE2EDuration="7.526951231s" podCreationTimestamp="2026-01-21 16:25:05 +0000 UTC" firstStartedPulling="2026-01-21 16:25:07.435627222 +0000 UTC m=+3539.126333496" lastFinishedPulling="2026-01-21 16:25:10.363690579 +0000 UTC m=+3542.054396843" observedRunningTime="2026-01-21 16:25:11.542092578 +0000 UTC m=+3543.232798852" watchObservedRunningTime="2026-01-21 16:25:12.526951231 +0000 UTC m=+3544.217657485" Jan 21 16:25:13 crc kubenswrapper[4739]: I0121 16:25:13.543808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"} Jan 21 16:25:13 crc kubenswrapper[4739]: I0121 16:25:13.566524 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmnsq" podStartSLOduration=2.019234309 podStartE2EDuration="4.566506956s" podCreationTimestamp="2026-01-21 16:25:09 +0000 UTC" firstStartedPulling="2026-01-21 16:25:10.472566462 +0000 UTC m=+3542.163272726" lastFinishedPulling="2026-01-21 16:25:13.019839109 +0000 UTC m=+3544.710545373" observedRunningTime="2026-01-21 16:25:13.565766977 +0000 UTC m=+3545.256473241" watchObservedRunningTime="2026-01-21 16:25:13.566506956 +0000 UTC m=+3545.257213220" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.784087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.784751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.830211 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:16 crc kubenswrapper[4739]: I0121 16:25:16.621518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:17 crc kubenswrapper[4739]: I0121 16:25:17.521023 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:18 crc kubenswrapper[4739]: E0121 16:25:18.416209 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:18 crc kubenswrapper[4739]: I0121 16:25:18.582638 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zk8jl" 
podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" containerID="cri-o://dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" gracePeriod=2 Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.004020 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168062 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168163 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168232 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.169659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities" (OuterVolumeSpecName: "utilities") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.176170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z" (OuterVolumeSpecName: "kube-api-access-8c56z") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "kube-api-access-8c56z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.231948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.269989 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.270029 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.270044 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.456207 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.457001 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.502299 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593732 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" exitCode=0 Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593802 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.595425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"33e620cb82954691dc3413e916410fd12ca12f740779eb3b47c264c9314eb69a"} Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.595456 4739 scope.go:117] "RemoveContainer" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.620986 4739 scope.go:117] "RemoveContainer" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.635113 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.653748 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.654794 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.658384 4739 scope.go:117] "RemoveContainer" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.713034 4739 scope.go:117] "RemoveContainer" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.721056 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": container with ID starting with dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab not found: ID does not exist" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.721247 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} err="failed to get container status \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": rpc error: code = NotFound desc = could not find container \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": container with ID starting with dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab not found: ID does not exist" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.721361 4739 scope.go:117] "RemoveContainer" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.722772 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": container with ID starting with 
414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc not found: ID does not exist" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.722843 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} err="failed to get container status \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": rpc error: code = NotFound desc = could not find container \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": container with ID starting with 414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc not found: ID does not exist" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.722878 4739 scope.go:117] "RemoveContainer" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.726010 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": container with ID starting with dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73 not found: ID does not exist" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.726059 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"} err="failed to get container status \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": rpc error: code = NotFound desc = could not find container \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": container with ID starting with dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73 not found: ID does not exist" Jan 21 16:25:20 crc kubenswrapper[4739]: I0121 16:25:20.794066 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" path="/var/lib/kubelet/pods/6c7b3caf-bafb-4f68-850a-916ab297ff42/volumes" Jan 21 16:25:21 crc kubenswrapper[4739]: I0121 16:25:21.922997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:22 crc kubenswrapper[4739]: I0121 16:25:22.624041 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmnsq" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" containerID="cri-o://c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" gracePeriod=2 Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.109875 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.147774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148095 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148456 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities" (OuterVolumeSpecName: "utilities") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148651 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.154312 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd" (OuterVolumeSpecName: "kube-api-access-jphkd") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "kube-api-access-jphkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.196500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.250618 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.250655 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634453 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" exitCode=0 Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634523 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634891 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"} Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.635048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7"} Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.635171 4739 scope.go:117] "RemoveContainer" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.669926 4739 scope.go:117] "RemoveContainer" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.670721 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.678889 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.692564 4739 scope.go:117] "RemoveContainer" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.728893 4739 scope.go:117] "RemoveContainer" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.729318 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": container with ID starting with c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b not found: ID does not exist" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729358 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"} err="failed to get container status 
\"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": rpc error: code = NotFound desc = could not find container \"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": container with ID starting with c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b not found: ID does not exist" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729387 4739 scope.go:117] "RemoveContainer" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9" Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.729606 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": container with ID starting with 351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9 not found: ID does not exist" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729640 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} err="failed to get container status \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": rpc error: code = NotFound desc = could not find container \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": container with ID starting with 351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9 not found: ID does not exist" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729659 4739 scope.go:117] "RemoveContainer" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d" Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.730446 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": container with ID starting with 8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d not found: ID does not exist" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d" Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.730475 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"} err="failed to get container status \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": rpc error: code = NotFound desc = could not find container \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": container with ID starting with 8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d not found: ID does not exist" Jan 21 16:25:24 crc kubenswrapper[4739]: I0121 16:25:24.808094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" path="/var/lib/kubelet/pods/e9087973-ce8f-4145-95a3-3cc84cfd4d70/volumes" Jan 21 16:25:28 crc kubenswrapper[4739]: E0121 16:25:28.632674 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:31 crc kubenswrapper[4739]: I0121 16:25:31.716974 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerID="ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372" exitCode=0 Jan 21 16:25:31 crc kubenswrapper[4739]: I0121 16:25:31.717051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerDied","Data":"ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372"} Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.140280 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275323 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275386 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275459 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276143 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod 
\"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph" (OuterVolumeSpecName: "ceph") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281658 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v" (OuterVolumeSpecName: "kube-api-access-5cg9v") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "kube-api-access-5cg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.306949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.309005 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.316255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.318756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.319150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.319987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.330928 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.332286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory" (OuterVolumeSpecName: "inventory") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378912 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378953 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378967 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378980 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378992 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379003 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379014 4739 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379025 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379036 4739 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379049 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379061 4739 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.739986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerDied","Data":"4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c"} Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.740031 4739 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.740053 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:25:38 crc kubenswrapper[4739]: E0121 16:25:38.843199 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:49 crc kubenswrapper[4739]: E0121 16:25:49.088180 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.278340 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279081 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279098 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279117 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279126 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279148 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279157 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279178 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279186 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279200 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279208 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" 
containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279235 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279247 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279255 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279441 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279459 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279478 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.280484 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.284507 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.284522 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.316939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.336347 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.337718 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.352531 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.383879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401002 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 
crc kubenswrapper[4739]: I0121 16:25:51.401206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503446 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503652 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503777 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504001 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504102 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504161 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505223 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.506021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.511903 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.513262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.515038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.519464 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.528385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.545847 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.596368 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607311 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607392 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607443 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608216 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: 
\"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.611145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.612148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.613877 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.616014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.616323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc 
kubenswrapper[4739]: I0121 16:25:51.618963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.642924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.663702 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.148424 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.153062 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.156211 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.160589 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.160751 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.182512 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.209728 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.245442 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.247476 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.259767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.260014 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.335271 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.346894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347526 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347599 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347670 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347757 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348291 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348359 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 
16:25:52.348462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450578 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450841 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450896 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450917 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450939 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450992 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: 
\"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.451017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.451831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.452214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.452460 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.453482 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.453756 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.462694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.472448 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.473035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.474024 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.480277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.496485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.497022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.506077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.510381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.517956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.518947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.539419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: 
I0121 16:25:52.539491 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.582146 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.591112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.638064 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: W0121 16:25:52.687190 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7353ecec_24ef_48a5_9046_95c8e0b77de0.slice/crio-241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821 WatchSource:0}: Error finding container 241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821: Status 404 returned error can't find the container with id 241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821 Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.699554 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.785732 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.894920 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.913666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821"} Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.918219 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.919652 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.922998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.956888 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.958002 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.971745 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.971813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.979457 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.017737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:53 crc kubenswrapper[4739]: W0121 16:25:53.058877 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e7c2005_9f9a_41b3_b7c0_7dc430637ba8.slice/crio-d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6 WatchSource:0}: Error finding container d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6: Status 404 returned error can't find the container with id d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6 Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.074134 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.076119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.096299 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.153441 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.155013 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163544 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163768 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5hs8m" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163931 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.173660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.175137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.175195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.176429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.180366 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.240889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " 
pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.246007 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.247606 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.255959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.260990 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.269400 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282922 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.283151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.301288 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400429 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400539 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: 
I0121 16:25:53.400572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.402914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.402942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.415989 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.435342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.442345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.478734 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.506989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: 
I0121 16:25:53.507115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507553 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.508486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.509619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.512681 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.543018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.567457 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.614846 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.910270 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.938291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.059840 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.200148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:54 crc kubenswrapper[4739]: W0121 16:25:54.209830 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9df549f9_8d1c_4b17_bda4_eeaa772d1554.slice/crio-1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d WatchSource:0}: Error finding container 1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d: Status 404 returned error can't find the container with id 1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.680894 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.976867 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.978421 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"59631a90156d4429e60246f2694bd2d8ef0aeb59dc5263292dcf0e82fc30c9f0"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.982115 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerStarted","Data":"937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.984847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerStarted","Data":"ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.985237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.986600 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d"} Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.817066 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.912893 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.914391 4739 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.951331 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.960580 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.023982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.025017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.070053 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"09d021b9095469c9cc5cc8c1c0c12531dda0c54ca9ac04d3e8bbb5ef23b9e619"} Jan 21 16:25:56 crc 
kubenswrapper[4739]: I0121 16:25:56.072300 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.116516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"5776cf963efc905ebe7165de20c65b0de7dc7b08c69f7edec29395da40cbbf22"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128661 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.130921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.132168 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.132479 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.139760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.140422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.142621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.149222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.166179 4739 generic.go:334] "Generic (PLEG): container finished" podID="dca676c7-1887-4337-b60b-c782c3002f46" containerID="b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8" exitCode=0 Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.166283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerDied","Data":"b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.214684 4739 generic.go:334] "Generic (PLEG): container finished" podID="294fb480-1e0e-452c-979d-affc62bad155" containerID="1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776" exitCode=0 Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.214767 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerDied","Data":"1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.233190 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.261626 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"46e75c4f2f215a62056f4d80b4e2ca05c6e97efdc451a05e5005b7ddb16a2d0b"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.267361 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.284084 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.279366 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.309842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"6627beb33e730052161bb8f0dd30957c352f5182692e6c72b468019f36bee33c"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.348429 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440273 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod 
\"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543658 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc 
kubenswrapper[4739]: I0121 16:25:56.545165 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.545381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.546956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.550691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.554264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.585355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.586054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.592646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.703342 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.355071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"d6a959f2da3dbbb60ec51652a688092afee571a231cee7bcc1998c5ee4f661db"} Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.365533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"9f41746f8a8a5748ec1110616153f6dc14cefc355c9881a0b51e4585a9d11180"} Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.413015 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.952219677 podStartE2EDuration="6.412997098s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="2026-01-21 16:25:53.064199016 +0000 UTC m=+3584.754905280" lastFinishedPulling="2026-01-21 16:25:54.524976437 +0000 UTC m=+3586.215682701" observedRunningTime="2026-01-21 16:25:57.397676036 +0000 UTC m=+3589.088382300" watchObservedRunningTime="2026-01-21 16:25:57.412997098 +0000 UTC m=+3589.103703362" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.542068 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.803505827 podStartE2EDuration="6.542041028s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="2026-01-21 16:25:52.699294243 +0000 UTC m=+3584.390000507" lastFinishedPulling="2026-01-21 16:25:54.437829444 +0000 UTC m=+3586.128535708" observedRunningTime="2026-01-21 16:25:57.459360284 +0000 UTC m=+3589.150066548" watchObservedRunningTime="2026-01-21 16:25:57.542041028 +0000 UTC m=+3589.232747282" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.552173 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:57 crc kubenswrapper[4739]: W0121 16:25:57.780380 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdecd60b_660a_4039_a35b_29fec73c85a7.slice/crio-ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a WatchSource:0}: Error finding container ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a: Status 404 returned error can't find the container with id ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.792060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.061978 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.142573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"dca676c7-1887-4337-b60b-c782c3002f46\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.142689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"dca676c7-1887-4337-b60b-c782c3002f46\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.143735 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dca676c7-1887-4337-b60b-c782c3002f46" (UID: "dca676c7-1887-4337-b60b-c782c3002f46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.161627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg" (OuterVolumeSpecName: "kube-api-access-slsgg") pod "dca676c7-1887-4337-b60b-c782c3002f46" (UID: "dca676c7-1887-4337-b60b-c782c3002f46"). InnerVolumeSpecName "kube-api-access-slsgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.167102 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.243829 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"294fb480-1e0e-452c-979d-affc62bad155\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.254948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"294fb480-1e0e-452c-979d-affc62bad155\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.246247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "294fb480-1e0e-452c-979d-affc62bad155" (UID: "294fb480-1e0e-452c-979d-affc62bad155"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.255967 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.256002 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.256020 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.266292 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7" (OuterVolumeSpecName: "kube-api-access-wjng7") pod "294fb480-1e0e-452c-979d-affc62bad155" (UID: "294fb480-1e0e-452c-979d-affc62bad155"). InnerVolumeSpecName "kube-api-access-wjng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.357858 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.397677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.397796 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" containerID="cri-o://a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.398320 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" containerID="cri-o://aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerDied","Data":"937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410629 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410686 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.422965 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.422947106 podStartE2EDuration="7.422947106s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:25:58.41827232 +0000 UTC m=+3590.108978584" watchObservedRunningTime="2026-01-21 16:25:58.422947106 +0000 UTC m=+3590.113653370" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.451572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"1b4e559dfd3f1dad65b69a6216ec778f0f338b9761331fc0616f62380df78ddf"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464198 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerDied","Data":"ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464237 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464316 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487410 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" containerID="cri-o://ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487541 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" containerID="cri-o://c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.500382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.850645 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.850629717 podStartE2EDuration="7.850629717s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:25:58.529833351 +0000 UTC m=+3590.220539615" 
watchObservedRunningTime="2026-01-21 16:25:58.850629717 +0000 UTC m=+3590.541335981" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.488434 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.547789 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589086 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589200 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589245 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589456 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589723 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589836 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.591205 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs" (OuterVolumeSpecName: "logs") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.593982 4739 generic.go:334] "Generic (PLEG): container finished" podID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" exitCode=0 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594247 4739 generic.go:334] "Generic (PLEG): container finished" podID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" exitCode=143 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594335 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594444 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.600747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.604317 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts" (OuterVolumeSpecName: "scripts") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612051 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr" (OuterVolumeSpecName: "kube-api-access-ss7lr") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "kube-api-access-ss7lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612408 4739 generic.go:334] "Generic (PLEG): container finished" podID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerID="aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" exitCode=0 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612435 4739 generic.go:334] "Generic (PLEG): container finished" podID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerID="a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" exitCode=143 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612481 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.622038 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph" (OuterVolumeSpecName: "ceph") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.674207 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692426 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692468 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692484 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692498 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692525 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692539 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.733999 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data" (OuterVolumeSpecName: "config-data") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.739261 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.746771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796076 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796085 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.867002 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.936333 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.951593 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.951869 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.952349 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.962801 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} err="failed to get container status \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.962923 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.966414 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.966453 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} err="failed to get container status \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": 
rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.966481 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.970014 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} err="failed to get container status \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.970070 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.975054 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} err="failed to get container status \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.984665 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985223 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985231 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985249 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985257 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985271 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294fb480-1e0e-452c-979d-affc62bad155" containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985279 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="294fb480-1e0e-452c-979d-affc62bad155" 
containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985531 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985555 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985578 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="294fb480-1e0e-452c-979d-affc62bad155" containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985594 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.986876 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.990218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.990496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.028172 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.043939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101897 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc 
kubenswrapper[4739]: I0121 16:26:00.101927 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102006 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102069 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102502 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102794 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.108872 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs" (OuterVolumeSpecName: "logs") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.112591 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.137368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph" (OuterVolumeSpecName: "ceph") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.156332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.159092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc" (OuterVolumeSpecName: "kube-api-access-nhmtc") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "kube-api-access-nhmtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.159410 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts" (OuterVolumeSpecName: "scripts") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205401 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205584 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc 
kubenswrapper[4739]: I0121 16:26:00.205653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205723 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205784 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205799 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205811 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205856 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205876 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.208913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.209335 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.209795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.211099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.216263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.220421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.235155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.237082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.250298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.250800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.284983 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.307962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data" (OuterVolumeSpecName: "config-data") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308414 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308445 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308457 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.332662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.348982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.410887 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.618331 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638196 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"59631a90156d4429e60246f2694bd2d8ef0aeb59dc5263292dcf0e82fc30c9f0"} Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638258 4739 scope.go:117] "RemoveContainer" containerID="aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638426 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.688133 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.698381 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.753458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: E0121 16:26:00.754155 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.755996 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: E0121 16:26:00.756125 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756233 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756655 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.757880 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.820994 4739 scope.go:117] "RemoveContainer" containerID="a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.821474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.822564 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.849860 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" path="/var/lib/kubelet/pods/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d/volumes" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.852515 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" path="/var/lib/kubelet/pods/9df549f9-8d1c-4b17-bda4-eeaa772d1554/volumes" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.879135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.928482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.928734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod 
\"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929370 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034183 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod \"glance-default-internal-api-0\" (UID: 
\"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034713 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.037001 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.037846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.038799 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.047438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.054042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.054749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 
16:26:01.055534 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.060976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.123687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.157290 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.188201 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.597569 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.665126 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.300057 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="3e7c2005-9f9a-41b3-b7c0-7dc430637ba8" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.319873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.366345 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="7353ecec-24ef-48a5-9046-95c8e0b77de0" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.713110 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"85f0fb04ca7a6e2446eba25236ca52b485f9c21d2ffd277dc33cc65d3c4a4526"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.408308 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.412011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.426763 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.457565 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.457776 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.504945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506202 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.559853 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:26:03 crc kubenswrapper[4739]: W0121 16:26:03.579965 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82cfddd4_081e_4b33_82e2_5dbd44a11e56.slice/crio-e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1 WatchSource:0}: Error finding container e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1: Status 404 returned error can't find the container with id e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1 Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " 
pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609617 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.616088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.620297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.626476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.636151 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.747484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.752412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"8b34d9957fddc9980f22728541494296abd1fca0991e5d8f7000a781f51270c7"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.802376 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.769350 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"064c864d2fc8ac711a53c683f63a6d30c0c50111816ae854818a404dad446e6f"} Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.820335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"fc7fed6bcc7e1d735f58dbbcaaab4fe7bc991d54f76ef5564ffaf7935cbdb429"} Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.910419 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.910396868 podStartE2EDuration="4.910396868s" podCreationTimestamp="2026-01-21 16:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:04.847285201 +0000 UTC m=+3596.537991465" watchObservedRunningTime="2026-01-21 16:26:04.910396868 +0000 UTC m=+3596.601103132" Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.949239 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:05 crc kubenswrapper[4739]: I0121 16:26:05.798937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerStarted","Data":"0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"} Jan 21 16:26:05 crc kubenswrapper[4739]: I0121 16:26:05.852221 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.852202434 podStartE2EDuration="6.852202434s" podCreationTimestamp="2026-01-21 16:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:05.847102747 +0000 UTC m=+3597.537809011" watchObservedRunningTime="2026-01-21 16:26:05.852202434 +0000 UTC m=+3597.542908698" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.608198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.688712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.834144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"f4ad484f90c8ad24d77f2ef4efe8a746bb7eb0ccd87613b6f8b0be20128660ae"} Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.619255 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.620879 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.798605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: 
I0121 16:26:10.798672 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.879204 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.879251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.188958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.189006 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.231447 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.235378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.888121 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.888440 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:13 crc kubenswrapper[4739]: I0121 16:26:13.911881 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:13 crc kubenswrapper[4739]: I0121 16:26:13.912293 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.558435 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.559554 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.560513 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.560591 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.563641 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.567202 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a"} Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977734 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94454c4b5-lnx6s" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" containerID="cri-o://3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be" gracePeriod=30 
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977939 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.978078 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94454c4b5-lnx6s" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" containerID="cri-o://96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a" gracePeriod=30
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.985827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.985867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.991494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"0db29e51458c97e25274d4e646c49d54badd68d36083d852d7b0c138bcd34537"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.994910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerStarted","Data":"6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.017859 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.017915 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.018067 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6967c7d685-tgtjz" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" containerID="cri-o://4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" gracePeriod=30
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.018385 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6967c7d685-tgtjz" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" containerID="cri-o://ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" gracePeriod=30
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.020913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-94454c4b5-lnx6s" podStartSLOduration=3.103653968 podStartE2EDuration="25.020894168s" podCreationTimestamp="2026-01-21 16:25:53 +0000 UTC" firstStartedPulling="2026-01-21 16:25:55.043260114 +0000 UTC m=+3586.733966378" lastFinishedPulling="2026-01-21 16:26:16.960500314 +0000 UTC m=+3608.651206578" observedRunningTime="2026-01-21 16:26:18.018601477 +0000 UTC m=+3609.709307741" watchObservedRunningTime="2026-01-21 16:26:18.020894168 +0000 UTC m=+3609.711600432"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.064569 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6967c7d685-tgtjz" podStartSLOduration=3.208744683 podStartE2EDuration="25.064550722s" podCreationTimestamp="2026-01-21 16:25:53 +0000 UTC" firstStartedPulling="2026-01-21 16:25:55.105937429 +0000 UTC m=+3586.796643693" lastFinishedPulling="2026-01-21 16:26:16.961743468 +0000 UTC m=+3608.652449732" observedRunningTime="2026-01-21 16:26:18.062879228 +0000 UTC m=+3609.753585492" watchObservedRunningTime="2026-01-21 16:26:18.064550722 +0000 UTC m=+3609.755256986"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.105413 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-hgftl" podStartSLOduration=3.010900431 podStartE2EDuration="15.105395151s" podCreationTimestamp="2026-01-21 16:26:03 +0000 UTC" firstStartedPulling="2026-01-21 16:26:04.941835333 +0000 UTC m=+3596.632541597" lastFinishedPulling="2026-01-21 16:26:17.036330053 +0000 UTC m=+3608.727036317" observedRunningTime="2026-01-21 16:26:18.095383041 +0000 UTC m=+3609.786089305" watchObservedRunningTime="2026-01-21 16:26:18.105395151 +0000 UTC m=+3609.796101405"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.142067 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f9d85f6b8-vfdq7" podStartSLOduration=3.801226979 podStartE2EDuration="23.142051437s" podCreationTimestamp="2026-01-21 16:25:55 +0000 UTC" firstStartedPulling="2026-01-21 16:25:57.576433243 +0000 UTC m=+3589.267139507" lastFinishedPulling="2026-01-21 16:26:16.917257701 +0000 UTC m=+3608.607963965" observedRunningTime="2026-01-21 16:26:18.131169334 +0000 UTC m=+3609.821875608" watchObservedRunningTime="2026-01-21 16:26:18.142051437 +0000 UTC m=+3609.832757701"
Jan 21 16:26:19 crc kubenswrapper[4739]: I0121 16:26:19.028231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"f3466572dc84029b6b4e4e16b42a891c8b48cdb70b399f1a5939ec2f89fabceb"}
Jan 21 16:26:19 crc kubenswrapper[4739]: I0121 16:26:19.053112 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-97dd88d6d-7bgrq" podStartSLOduration=3.893451769 podStartE2EDuration="23.053088555s" podCreationTimestamp="2026-01-21 16:25:56 +0000 UTC" firstStartedPulling="2026-01-21 16:25:57.803402846 +0000 UTC m=+3589.494109110" lastFinishedPulling="2026-01-21 16:26:16.963039632 +0000 UTC m=+3608.653745896" observedRunningTime="2026-01-21 16:26:19.050099615 +0000 UTC m=+3610.740805889" watchObservedRunningTime="2026-01-21 16:26:19.053088555 +0000 UTC m=+3610.743794819"
Jan 21 16:26:23 crc kubenswrapper[4739]: I0121 16:26:23.513306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6967c7d685-tgtjz"
Jan 21 16:26:23 crc kubenswrapper[4739]: I0121 16:26:23.616043 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-94454c4b5-lnx6s"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.556048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9d85f6b8-vfdq7"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.556402 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f9d85f6b8-vfdq7"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.705012 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-97dd88d6d-7bgrq"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.705829 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-97dd88d6d-7bgrq"
Jan 21 16:26:36 crc kubenswrapper[4739]: I0121 16:26:36.558179 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused"
Jan 21 16:26:36 crc kubenswrapper[4739]: I0121 16:26:36.706888 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused"
Jan 21 16:26:42 crc kubenswrapper[4739]: I0121 16:26:42.279968 4739 generic.go:334] "Generic (PLEG): container finished" podID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerID="6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6" exitCode=0
Jan 21 16:26:42 crc kubenswrapper[4739]: I0121 16:26:42.281652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerDied","Data":"6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6"}
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.644850 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hgftl"
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726139 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726159 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.732145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85" (OuterVolumeSpecName: "kube-api-access-7np85") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "kube-api-access-7np85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.732416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.736247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data" (OuterVolumeSpecName: "config-data") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.760556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827840 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827870 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827881 4739 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827889 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerDied","Data":"0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"}
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324388 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324126 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hgftl"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.044549 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: E0121 16:26:46.045226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.045246 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.045463 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.047357 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.055891 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.055937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.056120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.056152 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.094039 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.114330 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.116269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.119148 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155207 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.165266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.248810 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.250990 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257618 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257731 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258526 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258690 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID:
\"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.268652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.269109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.270542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.272305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.272378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.308642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.312062 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"] Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 
crc kubenswrapper[4739]: I0121 16:26:46.361505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361722 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 
16:26:46.361736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.370230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.371059 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.374806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.375273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.395444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.407099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.407953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.449297 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.465434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.466415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.467001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc 
kubenswrapper[4739]: I0121 16:26:46.467303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.467767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.505928 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.542197 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.584254 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.587625 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.599730 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.631881 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.672945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.674765 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.675400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.675464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780035 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: 
I0121 16:26:46.788590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.789016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.791252 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.795456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.807281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.807728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.842597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.954301 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.238863 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.351494 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"] Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.393881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"f75c581e3b55e98434399a150d4182397e630133bcaac9f87befaf60d17b8e5d"} Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.394631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerStarted","Data":"0f7a216cecb0ee0942ca4878f2809e15a4fe22f540df6ff6e5d10b22e9c8b820"} Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.584282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:26:47 crc kubenswrapper[4739]: W0121 16:26:47.660203 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1275174_b8b7_43a4_9fb9_554f965bb836.slice/crio-87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098 WatchSource:0}: Error finding container 87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098: Status 404 returned error can't find the container with id 87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098 Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.797739 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.462632 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"0a354fa2e6e9b63851ef12bc4c021ff1ba8baf5bca769c0a495fc03d87c29a5c"} Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.464590 4739 generic.go:334] "Generic (PLEG): container finished" podID="5a695c51-4390-4957-8320-d381011ebcf9" containerID="182dfafa9dc96e00c8694b51040bc79d31c7041bcc28865de3cdf0097e474ca6" exitCode=0 Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.464641 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerDied","Data":"182dfafa9dc96e00c8694b51040bc79d31c7041bcc28865de3cdf0097e474ca6"} Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550345 4739 generic.go:334] "Generic (PLEG): container finished" podID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerID="96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a" exitCode=137 Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550564 4739 generic.go:334] "Generic (PLEG): container finished" podID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerID="3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be" exitCode=137 Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a"} Jan 21 16:26:48 crc 
kubenswrapper[4739]: I0121 16:26:48.550760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be"} Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.600956 4739 generic.go:334] "Generic (PLEG): container finished" podID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" exitCode=137 Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.601017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"} Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.629669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.047711 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149855 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149991 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs" (OuterVolumeSpecName: "logs") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.161662 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.162058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k" (OuterVolumeSpecName: "kube-api-access-sml4k") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "kube-api-access-sml4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.196960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data" (OuterVolumeSpecName: "config-data") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.227379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts" (OuterVolumeSpecName: "scripts") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.252982 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253020 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253033 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253044 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253053 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.545128 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs" (OuterVolumeSpecName: "logs") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.661051 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.661716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.663104 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.672777 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerStarted","Data":"1dc3fa546e6a0b5af2c19b2c01ff15cb1e5cd41bda2311744a00005cc41cb70d"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.673227 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk" (OuterVolumeSpecName: "kube-api-access-jbrfk") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "kube-api-access-jbrfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.673347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.683909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.720237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" podStartSLOduration=3.720212815 podStartE2EDuration="3.720212815s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:49.70662991 +0000 UTC m=+3641.397336184" watchObservedRunningTime="2026-01-21 16:26:49.720212815 +0000 UTC m=+3641.410919089" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"6627beb33e730052161bb8f0dd30957c352f5182692e6c72b468019f36bee33c"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725346 4739 scope.go:117] "RemoveContainer" containerID="96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725468 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763869 4739 generic.go:334] "Generic (PLEG): container finished" podID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" exitCode=137 Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763934 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"09d021b9095469c9cc5cc8c1c0c12531dda0c54ca9ac04d3e8bbb5ef23b9e619"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.764019 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.764992 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.765035 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.815105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts" (OuterVolumeSpecName: "scripts") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.826181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.829597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data" (OuterVolumeSpecName: "config-data") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.852503 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.875728 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.875759 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.881173 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.185895 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.199848 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.230293 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.259351 4739 scope.go:117] "RemoveContainer" containerID="3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.536020 4739 scope.go:117] "RemoveContainer" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.811434 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" path="/var/lib/kubelet/pods/1900bc2e-e626-481f-89d3-bc738ea4eb09/volumes" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.812213 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" path="/var/lib/kubelet/pods/b968f9c5-ea86-4b94-889c-09ae80dc22ea/volumes" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874266 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" containerID="cri-o://1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" gracePeriod=30 Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874496 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874738 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" containerID="cri-o://489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" gracePeriod=30 Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.890540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.927263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.927231252 podStartE2EDuration="4.927231252s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:50.90367711 +0000 UTC m=+3642.594383374" watchObservedRunningTime="2026-01-21 16:26:50.927231252 +0000 UTC m=+3642.617937516" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.028297 4739 scope.go:117] "RemoveContainer" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.165237 4739 scope.go:117] "RemoveContainer" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" Jan 21 16:26:51 crc kubenswrapper[4739]: E0121 16:26:51.166293 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": container with ID starting with ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43 not found: ID does not exist" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.166319 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"} err="failed to get container status \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": rpc 
error: code = NotFound desc = could not find container \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": container with ID starting with ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43 not found: ID does not exist" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.166339 4739 scope.go:117] "RemoveContainer" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: E0121 16:26:51.192332 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": container with ID starting with 4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1 not found: ID does not exist" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.192372 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"} err="failed to get container status \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": rpc error: code = NotFound desc = could not find container \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": container with ID starting with 4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1 not found: ID does not exist" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.575242 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.716056 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.733357 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.841859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842015 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842164 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs" (OuterVolumeSpecName: "logs") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.843481 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.844149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.879637 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w" (OuterVolumeSpecName: "kube-api-access-qxp5w") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "kube-api-access-qxp5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.881328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts" (OuterVolumeSpecName: "scripts") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.882773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.902013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916190 4739 generic.go:334] "Generic (PLEG): container finished" podID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" exitCode=143 Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916227 4739 generic.go:334] "Generic (PLEG): container finished" podID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" exitCode=143 Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916280 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"0a354fa2e6e9b63851ef12bc4c021ff1ba8baf5bca769c0a495fc03d87c29a5c"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916346 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916472 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945448 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945477 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945485 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945493 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945504 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.960092 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.998946 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.041857425 podStartE2EDuration="5.998924501s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="2026-01-21 16:26:47.267460759 +0000 UTC m=+3638.958167023" lastFinishedPulling="2026-01-21 16:26:48.224527835 +0000 UTC m=+3639.915234099" observedRunningTime="2026-01-21 16:26:51.980734672 +0000 UTC m=+3643.671440936" watchObservedRunningTime="2026-01-21 16:26:51.998924501 +0000 UTC m=+3643.689630765" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.034042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data" (OuterVolumeSpecName: "config-data") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.046771 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.094328 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.123229 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.126180 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.126236 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} err="failed to get container status \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.126291 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.129448 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.129505 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} err="failed to get container 
status \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.129585 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.131096 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} err="failed to get container status \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.131129 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.134481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} err="failed to get container status \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.259407 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.270399 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283706 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283723 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283740 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283762 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283767 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283786 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283792 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283799 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283805 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283829 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284037 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284063 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284074 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284098 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.285167 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.292934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.293135 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.293262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.442685 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.457439 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.459013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.459233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.561885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.561964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562121 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562277 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.563481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.572556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.578294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.584002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.617130 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.827757 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" path="/var/lib/kubelet/pods/33dda5a7-7f30-4550-8f80-9d3a5260e79d/volumes" Jan 21 16:26:53 crc kubenswrapper[4739]: I0121 16:26:53.284357 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:53 crc kubenswrapper[4739]: W0121 16:26:53.292682 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d033dc1_1e44_4e90_8d00_371620e1d520.slice/crio-af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea WatchSource:0}: Error finding container af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea: Status 404 returned error can't find the container with id af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea Jan 21 16:26:54 crc kubenswrapper[4739]: I0121 16:26:54.020954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"ccfff194f9b1d368769066fe1fa89d0208ad7c1da29879296e6f3ad8267221d8"} Jan 21 16:26:54 crc kubenswrapper[4739]: I0121 16:26:54.021282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea"} Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.046901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"e80ed27c84bd4a7a6efd542f62709cd7d45ece8418d40b825a400d419599b6d9"} Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.047746 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.078896 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.078877344 podStartE2EDuration="3.078877344s" podCreationTimestamp="2026-01-21 16:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:55.067109107 +0000 UTC m=+3646.757815381" watchObservedRunningTime="2026-01-21 16:26:55.078877344 +0000 UTC m=+3646.769583608" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.449937 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.543917 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.621536 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.621801 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" containerID="cri-o://b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c" gracePeriod=10 Jan 21 16:26:57 crc kubenswrapper[4739]: I0121 16:26:57.072937 4739 
generic.go:334] "Generic (PLEG): container finished" podID="c7eae90b-f949-4872-a985-1066d94b337a" containerID="b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c" exitCode=0 Jan 21 16:26:57 crc kubenswrapper[4739]: I0121 16:26:57.072996 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c"} Jan 21 16:26:59 crc kubenswrapper[4739]: I0121 16:26:59.395435 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:26:59 crc kubenswrapper[4739]: I0121 16:26:59.436135 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.532781 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533562 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" containerID="cri-o://876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533683 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" containerID="cri-o://abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" containerID="cri-o://4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533755 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" containerID="cri-o://e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" gracePeriod=30 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133414 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" exitCode=0 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133443 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" exitCode=2 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b"} Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133850 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355"} Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 
16:27:02.135412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6"} Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.135435 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.272415 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.403796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.403864 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.417030 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4" (OuterVolumeSpecName: "kube-api-access-vgjm4") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "kube-api-access-vgjm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.510976 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.533706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.566553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.569382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config" (OuterVolumeSpecName: "config") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.574118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.593212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619340 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619375 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619387 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619397 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619409 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159219 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" exitCode=0 Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159699 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19" exitCode=0 Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159835 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.166902 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.167676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.180164 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.214353 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.241941 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.355906 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: 
I0121 16:27:03.444881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.448001 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.448352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.455488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.463023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts" (OuterVolumeSpecName: "scripts") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549560 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549591 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549600 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.603152 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q" (OuterVolumeSpecName: "kube-api-access-82r4q") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "kube-api-access-82r4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.649146 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.651515 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.688441 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.709102 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.720539 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753277 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753522 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753532 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.854957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data" (OuterVolumeSpecName: "config-data") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.857708 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.113838 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: i/o timeout" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc"} Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177260 4739 scope.go:117] "RemoveContainer" containerID="abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.179686 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log" containerID="cri-o://b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" gracePeriod=30 Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.180781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4"} Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.180855 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" containerID="cri-o://1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" gracePeriod=30 Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.202545 4739 scope.go:117] "RemoveContainer" containerID="4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.214876 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.812859737 podStartE2EDuration="18.214856246s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="2026-01-21 16:26:47.663792497 +0000 UTC m=+3639.354498761" lastFinishedPulling="2026-01-21 16:27:02.065789006 +0000 UTC m=+3653.756495270" observedRunningTime="2026-01-21 16:27:04.211069494 +0000 UTC m=+3655.901775758" watchObservedRunningTime="2026-01-21 16:27:04.214856246 +0000 UTC m=+3655.905562510" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.236320 4739 scope.go:117] "RemoveContainer" containerID="e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.259413 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.261060 4739 scope.go:117] "RemoveContainer" 
containerID="876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.271096 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.286762 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292250 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292284 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292302 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292309 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292319 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292324 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292334 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="init" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292340 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="init" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292351 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292357 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292380 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292386 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292545 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292560 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292572 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" 
containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292591 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.294399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298149 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298234 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298259 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.310575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471449 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471680 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.472307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.472378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.479572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.479707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.481843 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.482449 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.492922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.510039 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.620135 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.812351 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7eae90b-f949-4872-a985-1066d94b337a" path="/var/lib/kubelet/pods/c7eae90b-f949-4872-a985-1066d94b337a/volumes" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.828966 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" path="/var/lib/kubelet/pods/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925/volumes" Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.222715 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.223102 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.267270 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:05 crc kubenswrapper[4739]: W0121 16:27:05.270520 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044b152f_3b3e_4948_a0bd_7b4f3732770f.slice/crio-c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee WatchSource:0}: Error finding container c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee: Status 404 returned error can't find the container with id c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.069071 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.224230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee"} Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.376736 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 16:27:07 crc kubenswrapper[4739]: I0121 16:27:07.256965 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} Jan 21 16:27:07 crc kubenswrapper[4739]: I0121 16:27:07.383123 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:35128->10.217.0.246:8443: read: connection reset by peer" Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.269957 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.276578 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" exitCode=0 Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.276620 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"} Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.745638 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.843366 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287399 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" containerID="cri-o://a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" gracePeriod=30 Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287509 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" containerID="cri-o://d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" gracePeriod=30 Jan 21 16:27:10 crc kubenswrapper[4739]: I0121 16:27:10.305439 4739 generic.go:334] "Generic (PLEG): container finished" podID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" exitCode=0 Jan 21 16:27:10 crc kubenswrapper[4739]: I0121 16:27:10.305487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} Jan 21 16:27:11 crc kubenswrapper[4739]: E0121 16:27:11.631019 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160f61f3_f501_4220_ba9c_6db0fb397da9.slice/crio-conmon-a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160f61f3_f501_4220_ba9c_6db0fb397da9.slice/crio-a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.806078 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847747 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.859370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.868975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts" (OuterVolumeSpecName: "scripts") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.881199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx" (OuterVolumeSpecName: "kube-api-access-jfnxx") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "kube-api-access-jfnxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.883853 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.921312 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955290 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955320 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955332 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955342 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955351 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.002950 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data" (OuterVolumeSpecName: "config-data") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.057220 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327111 4739 generic.go:334] "Generic (PLEG): container finished" podID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" exitCode=0 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327190 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"f75c581e3b55e98434399a150d4182397e630133bcaac9f87befaf60d17b8e5d"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327206 4739 scope.go:117] "RemoveContainer" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327325 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334854 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" containerID="cri-o://1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334956 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335079 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" containerID="cri-o://fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335195 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" containerID="cri-o://8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335261 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" containerID="cri-o://12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.375376 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.402582205 podStartE2EDuration="8.375343197s" podCreationTimestamp="2026-01-21 16:27:04 +0000 UTC" firstStartedPulling="2026-01-21 16:27:05.273614587 +0000 UTC m=+3656.964320851" lastFinishedPulling="2026-01-21 16:27:11.246375579 +0000 UTC m=+3662.937081843" observedRunningTime="2026-01-21 16:27:12.36168493 +0000 UTC m=+3664.052391194" watchObservedRunningTime="2026-01-21 16:27:12.375343197 +0000 UTC m=+3664.066049461" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.391568 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.406045 4739 scope.go:117] "RemoveContainer" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.409469 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425149 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.425538 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425553 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.425567 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425575 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425730 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425751 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.426973 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.431916 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.435969 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.463875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.463971 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.481249 4739 scope.go:117] "RemoveContainer" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.482262 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": container with ID starting with d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815 not found: ID does not exist" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.482300 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} err="failed to get container status \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": rpc error: code = NotFound desc = could not find container \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": container with ID starting with d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815 not found: ID does not exist" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.482328 4739 scope.go:117] "RemoveContainer" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.486451 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": container with ID starting with a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62 not found: ID does not exist" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.486481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} err="failed to get container status \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": rpc error: code = NotFound desc = could not find container \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": container with ID starting with a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62 not found: ID does not exist" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566628 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566740 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 
crc kubenswrapper[4739]: I0121 16:27:12.567054 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.567182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.571658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.572251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.573250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.584014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.584881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.764530 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.799116 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" path="/var/lib/kubelet/pods/160f61f3-f501-4220-ba9c-6db0fb397da9/volumes" Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.315640 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.386992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"5f6fa1ce0a6af88aa767ecaf1028b3de06fd42f2c9b0b6eea2bd8b8488f5c6e6"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400419 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400656 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" exitCode=2 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400747 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.401051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.401065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.411866 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"5e5ce3666efd05e2490599bad8155663c1e1bc583689ccfbb42c8d20c5f8c3fc"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.412636 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"7d3d94241b3de07635e140c9b9c9f58f7eb3cc85da92b004cfcaab7f81eae552"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.536444 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.590871 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.590846383 podStartE2EDuration="2.590846383s" podCreationTimestamp="2026-01-21 16:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:27:14.446179564 +0000 UTC m=+3666.136885838" watchObservedRunningTime="2026-01-21 16:27:14.590846383 +0000 UTC m=+3666.281552657" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.041902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182906 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182984 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183060 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183264 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183722 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.185654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.202761 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4" (OuterVolumeSpecName: "kube-api-access-qwzd4") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "kube-api-access-qwzd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.202952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts" (OuterVolumeSpecName: "scripts") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.252940 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286133 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286158 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286170 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286178 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286186 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.293172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.306957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.335512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data" (OuterVolumeSpecName: "config-data") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387908 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387942 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387951 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424076 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" exitCode=0 Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424239 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.425011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee"} Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.425028 4739 scope.go:117] "RemoveContainer" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.466973 4739 scope.go:117] "RemoveContainer" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.474578 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.511518 4739 scope.go:117] "RemoveContainer" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.532098 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.541660 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542324 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542400 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542473 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542545 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542623 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542686 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542786 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543143 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543229 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543309 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543464 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.545513 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.550450 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559086 4739 scope.go:117] "RemoveContainer" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559295 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559728 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583026 4739 scope.go:117] "RemoveContainer" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.583457 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": container with ID starting with fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c not found: ID does not exist" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583514 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} err="failed to get container status \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": rpc error: code = NotFound desc = could not find container \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": container with ID starting with fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583533 4739 scope.go:117] "RemoveContainer" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.583871 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": container with ID starting with 12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a not found: ID does not exist" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583910 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} err="failed to get container status \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": rpc error: code = NotFound desc = could not find container \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": container with ID starting with 12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583936 4739 scope.go:117] "RemoveContainer" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.584215 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": container with ID starting with 8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281 not found: ID does not exist" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584246 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} err="failed to get container status \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": rpc error: code = NotFound desc = could not find container \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": container with ID starting with 8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281 not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584266 4739 scope.go:117] "RemoveContainer" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.584453 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": container with ID starting with 1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb not found: ID does not exist" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584485 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} err="failed to get container status \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": rpc error: code = NotFound desc = could not find container \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": container with ID starting with 1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700324 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.802919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803401 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803530 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.807529 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.807539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.809070 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.810669 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.813438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.832752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.884237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.451669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.555660 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.792304 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" path="/var/lib/kubelet/pods/044b152f-3b3e-4948-a0bd-7b4f3732770f/volumes" Jan 21 16:27:17 crc kubenswrapper[4739]: I0121 16:27:17.457258 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"1c2efdd25b4fc7c52916fc8029d7f325a7d914c4bfb654d1b9710dbcbac680c7"} Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.147175 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.194853 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.470443 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" containerID="cri-o://adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" gracePeriod=30 Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.470758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.471135 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" containerID="cri-o://130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" gracePeriod=30 
Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.493114 4739 generic.go:334] "Generic (PLEG): container finished" podID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerID="130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" exitCode=0 Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494598 4739 generic.go:334] "Generic (PLEG): container finished" podID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerID="adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" exitCode=1 Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.493311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494753 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.498326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"5c2c8c6352aa09eb23a8a4e225553a4bb91ca409836c5b1c4a22f635ee0a8a6d"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.511786 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633146 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633357 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633494 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.638842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.638915 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.647579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c" (OuterVolumeSpecName: "kube-api-access-jrk9c") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "kube-api-access-jrk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.650455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph" (OuterVolumeSpecName: "ceph") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.650594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.669631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts" (OuterVolumeSpecName: "scripts") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.736158 4739 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738431 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738447 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738485 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738499 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738511 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.752072 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.843752 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.852423 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data" (OuterVolumeSpecName: "config-data") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.945520 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.508538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"60d2ead798c78244628d928bf17f3b7335ade6203f5ac1e87bb95a0af55257af"} Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.508576 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.551577 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.559754 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.598543 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: E0121 16:27:21.598975 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.598995 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: E0121 16:27:21.599020 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599278 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.600364 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.602628 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.612624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771995 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772011 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772152 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772236 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc 
kubenswrapper[4739]: I0121 16:27:21.873609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.873982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874130 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.875532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.880725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.880864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.881113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.881492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.893030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.893690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.909338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.922599 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.599373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.765807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.802916 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" path="/var/lib/kubelet/pods/a1275174-b8b7-43a4-9fb9-554f965bb836/volumes" Jan 21 16:27:23 crc kubenswrapper[4739]: I0121 16:27:23.527654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"661ef844ac2c98f9464862a396f3de96f972af415f1df7963903ba713d1417e6"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.541524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"cd9edeacb6155c8cd86c2e9a5f5f7c2d82557892927f36ceeeaf12de80a7325f"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.542156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"7091e2cd119ed0ef89c98d1c1c32d943f9657d73c2d493f1995d8ca0f35b4bc1"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.546201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"340cf28a7f695546a60f72e843f030d3a886fc706d143479b682c2dd5f6bd4af"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.546499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.577026 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.577004278 podStartE2EDuration="3.577004278s" podCreationTimestamp="2026-01-21 16:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:27:24.565507359 +0000 UTC m=+3676.256213633" watchObservedRunningTime="2026-01-21 16:27:24.577004278 +0000 UTC m=+3676.267710542" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.555973 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.556330 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.584103 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.432130569 podStartE2EDuration="11.5840848s" podCreationTimestamp="2026-01-21 16:27:15 +0000 UTC" firstStartedPulling="2026-01-21 16:27:16.455727712 +0000 UTC m=+3668.146433986" lastFinishedPulling="2026-01-21 
16:27:23.607681953 +0000 UTC m=+3675.298388217" observedRunningTime="2026-01-21 16:27:24.612895584 +0000 UTC m=+3676.303601858" watchObservedRunningTime="2026-01-21 16:27:26.5840848 +0000 UTC m=+3678.274791064" Jan 21 16:27:31 crc kubenswrapper[4739]: I0121 16:27:31.923151 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.452503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.630522 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646448 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" exitCode=137 Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"} Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646513 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"1b4e559dfd3f1dad65b69a6216ec778f0f338b9761331fc0616f62380df78ddf"} Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646547 4739 scope.go:117] "RemoveContainer" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.768792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769282 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769388 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769408 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs" (OuterVolumeSpecName: "logs") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769423 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.770601 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.809756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.814003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld" (OuterVolumeSpecName: "kube-api-access-6mtld") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "kube-api-access-6mtld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.815328 4739 scope.go:117] "RemoveContainer" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.816350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.834128 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data" (OuterVolumeSpecName: "config-data") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.847732 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.856381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts" (OuterVolumeSpecName: "scripts") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873449 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873481 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873518 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873532 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873543 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873555 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.935338 4739 scope.go:117] "RemoveContainer" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: E0121 16:27:34.935985 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": container with ID starting with 1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11 not found: ID does not exist" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936019 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"} err="failed to get container status \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": rpc error: code = NotFound desc = 
could not find container \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": container with ID starting with 1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11 not found: ID does not exist" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936045 4739 scope.go:117] "RemoveContainer" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" Jan 21 16:27:34 crc kubenswrapper[4739]: E0121 16:27:34.936266 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": container with ID starting with b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6 not found: ID does not exist" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936290 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"} err="failed to get container status \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": rpc error: code = NotFound desc = could not find container \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": container with ID starting with b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6 not found: ID does not exist" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.978700 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.987223 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:27:35 crc kubenswrapper[4739]: I0121 16:27:35.222631 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:27:35 crc kubenswrapper[4739]: I0121 16:27:35.222677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:27:36 crc kubenswrapper[4739]: I0121 16:27:36.801419 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" path="/var/lib/kubelet/pods/c9d9299c-a9af-44e5-828c-3cc219ce1e22/volumes" Jan 21 16:27:43 crc kubenswrapper[4739]: I0121 16:27:43.712066 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 21 16:27:45 crc kubenswrapper[4739]: I0121 16:27:45.896715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:27:51 crc kubenswrapper[4739]: I0121 16:27:51.067889 4739 scope.go:117] "RemoveContainer" containerID="b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c" Jan 21 16:27:51 crc kubenswrapper[4739]: I0121 16:27:51.139697 4739 scope.go:117] "RemoveContainer" containerID="1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3" Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.222909 4739 
patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.223483 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.223532 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.224352 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.224404 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" gracePeriod=600 Jan 21 16:28:05 crc kubenswrapper[4739]: E0121 16:28:05.352485 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108468 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" exitCode=0 Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"} Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108552 4739 scope.go:117] "RemoveContainer" containerID="817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62" Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.109907 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:28:06 crc kubenswrapper[4739]: E0121 16:28:06.111546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:28:19 crc kubenswrapper[4739]: I0121 16:28:19.783330 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:28:19 crc kubenswrapper[4739]: E0121 16:28:19.784185 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:28:34 crc kubenswrapper[4739]: I0121 16:28:34.783367 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:28:34 crc kubenswrapper[4739]: E0121 16:28:34.784149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.976354 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:28:38 crc kubenswrapper[4739]: E0121 16:28:38.978118 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978217 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log" Jan 21 16:28:38 crc kubenswrapper[4739]: E0121 16:28:38.978356 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978434 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978684 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978758 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.979461 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.982855 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983355 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983862 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.984456 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080896 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dsx\" (UniqueName: 
\"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183753 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183838 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod 
\"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184916 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.186168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.190517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.190677 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.192415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 
16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.201544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.205112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.219186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.300473 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.766016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:28:40 crc kubenswrapper[4739]: I0121 16:28:40.413609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerStarted","Data":"6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201"} Jan 21 16:28:48 crc kubenswrapper[4739]: I0121 16:28:48.789884 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:28:48 crc kubenswrapper[4739]: E0121 16:28:48.790684 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:29:02 crc kubenswrapper[4739]: I0121 16:29:02.782938 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:29:02 crc kubenswrapper[4739]: E0121 16:29:02.783712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:29:15 crc kubenswrapper[4739]: I0121 16:29:15.782800 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:29:15 crc kubenswrapper[4739]: E0121 16:29:15.783563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
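[Note] The reconciler entries above trace the kubelet volume pipeline for the tempest pod in order: VerifyControllerAttachedVolume, then MountVolume (including MountDevice for the local PV, which reports its device mount path /mnt/openstack/pv02), then MountVolume.SetUp per volume. In pod-spec terms, volumes of three of the plugin types seen here look roughly like the client-go fragment below; the volume names match the log, but the referenced ConfigMap/Secret names are placeholders, since the log records volume names rather than their sources.

    // volumes.go: illustrative declarations for three of the volume types
    // the reconciler mounts above (configmap, secret, empty-dir).
    package main

    import corev1 "k8s.io/api/core/v1"

    var tempestVolumes = []corev1.Volume{
        {
            Name: "config-data", // handled by kubernetes.io/configmap
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "tempest-config", // placeholder name
                    },
                },
            },
        },
        {
            Name: "ssh-key", // handled by kubernetes.io/secret
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "tempest-ssh-key", // placeholder name
                },
            },
        },
        {
            Name: "test-operator-ephemeral-workdir", // kubernetes.io/empty-dir
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{},
            },
        },
    }

    func main() { _ = tempestVolumes }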
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.707982 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.710639 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75dsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(156e0f25-edfe-462a-ae5f-9f5642bef8bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.711941 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.813868 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb"
Jan 21 16:29:29 crc kubenswrapper[4739]: I0121 16:29:29.782973 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:29 crc kubenswrapper[4739]: E0121 16:29:29.783848 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:36 crc kubenswrapper[4739]: I0121 16:29:36.263564 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 21 16:29:37 crc kubenswrapper[4739]: I0121 16:29:37.946291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerStarted","Data":"91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937"}
Jan 21 16:29:37 crc kubenswrapper[4739]: I0121 16:29:37.975554 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.469216574 podStartE2EDuration="1m0.975534954s" podCreationTimestamp="2026-01-21 16:28:37 +0000 UTC" firstStartedPulling="2026-01-21 16:28:39.754880604 +0000 UTC m=+3751.445586868" lastFinishedPulling="2026-01-21 16:29:36.261198984 +0000 UTC m=+3807.951905248" observedRunningTime="2026-01-21 16:29:37.964039275 +0000 UTC m=+3809.654745539" watchObservedRunningTime="2026-01-21 16:29:37.975534954 +0000 UTC m=+3809.666241218"
Jan 21 16:29:40 crc kubenswrapper[4739]: I0121 16:29:40.783475 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:40 crc kubenswrapper[4739]: E0121 16:29:40.784467 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:52 crc kubenswrapper[4739]: I0121 16:29:52.783177 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:52 crc kubenswrapper[4739]: E0121 16:29:52.783999 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
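[Note] The first pull of quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified was canceled mid-copy ("copying config: context canceled"), so the container went ErrImagePull and then ImagePullBackOff; a later pull succeeded by 16:29:36 and the container started at 16:29:37. The pod_startup_latency_tracker entry above is internally consistent: the SLO duration excludes time spent pulling images, and the numbers check out exactly, as the arithmetic below shows (all three values are copied from the log line).

    // sloduration.go: podStartE2EDuration minus the pull window should
    // equal podStartSLOduration.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        e2e := 60975534954 * time.Nanosecond // podStartE2EDuration = 1m0.975534954s
        // lastFinishedPulling (16:29:36.261198984) - firstStartedPulling
        // (16:28:39.754880604) = 56.506318380s
        pull := 56506318380 * time.Nanosecond
        fmt.Println(e2e - pull) // 4.469216574s == podStartSLOduration
    }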
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.202429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.207024 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.219969 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.228474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.229845 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483833 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.485279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.503497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.506067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.532748 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:01 crc kubenswrapper[4739]: I0121 16:30:01.081543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.165851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerStarted","Data":"d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa"}
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.167218 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerStarted","Data":"a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"}
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.186644 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" podStartSLOduration=2.186623613 podStartE2EDuration="2.186623613s" podCreationTimestamp="2026-01-21 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:30:02.180922488 +0000 UTC m=+3833.871628762" watchObservedRunningTime="2026-01-21 16:30:02.186623613 +0000 UTC m=+3833.877329877"
Jan 21 16:30:03 crc kubenswrapper[4739]: I0121 16:30:03.200063 4739 generic.go:334] "Generic (PLEG): container finished" podID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerID="d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa" exitCode=0
Jan 21 16:30:03 crc kubenswrapper[4739]: I0121 16:30:03.200351 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerDied","Data":"d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa"}
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.632529 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.784763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.785024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.785059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.787061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.791484 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk" (OuterVolumeSpecName: "kube-api-access-lzrmk") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "kube-api-access-lzrmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.791977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887518 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887730 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887804 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244225 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerDied","Data":"a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"}
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244284 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244629 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.292090 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"]
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.303740 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"]
Jan 21 16:30:06 crc kubenswrapper[4739]: I0121 16:30:06.795919 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" path="/var/lib/kubelet/pods/1b5f7037-511d-4ca6-865c-c3a81e4b131d/volumes"
Jan 21 16:30:07 crc kubenswrapper[4739]: I0121 16:30:07.783326 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:07 crc kubenswrapper[4739]: E0121 16:30:07.783549 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:18 crc kubenswrapper[4739]: I0121 16:30:18.791551 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:18 crc kubenswrapper[4739]: E0121 16:30:18.792406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.551121 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"] Jan 21 16:30:20 crc kubenswrapper[4739]: E0121 16:30:20.551841 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.551854 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.552052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.553356 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.614050 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"] Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730113 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.832706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: 
\"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833704 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.862077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.881785 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:21 crc kubenswrapper[4739]: I0121 16:30:21.412704 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"] Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396127 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1" exitCode=0 Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1"} Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396442 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254"} Jan 21 16:30:25 crc kubenswrapper[4739]: I0121 16:30:25.422960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b"} Jan 21 16:30:29 crc kubenswrapper[4739]: I0121 16:30:29.458966 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b" exitCode=0 Jan 21 16:30:29 crc kubenswrapper[4739]: I0121 16:30:29.459172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b"} Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.470450 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04"} Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.504505 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qp85b" podStartSLOduration=2.983253252 podStartE2EDuration="10.504486844s" podCreationTimestamp="2026-01-21 16:30:20 +0000 UTC" firstStartedPulling="2026-01-21 16:30:22.399314135 +0000 UTC m=+3854.090020399" lastFinishedPulling="2026-01-21 16:30:29.920547727 +0000 UTC m=+3861.611253991" observedRunningTime="2026-01-21 16:30:30.4969582 +0000 UTC m=+3862.187664464" watchObservedRunningTime="2026-01-21 16:30:30.504486844 +0000 UTC m=+3862.195193108" Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.882262 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.882698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:31 crc kubenswrapper[4739]: I0121 16:30:31.929833 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" probeResult="failure" output=< Jan 21 16:30:31 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:30:31 crc kubenswrapper[4739]: > Jan 21 16:30:32 crc kubenswrapper[4739]: I0121 16:30:32.782851 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:30:32 crc kubenswrapper[4739]: E0121 16:30:32.783670 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:30:41 crc kubenswrapper[4739]: I0121 16:30:41.932239 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" probeResult="failure" output=< Jan 21 16:30:41 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:30:41 crc kubenswrapper[4739]: > Jan 21 16:30:43 crc kubenswrapper[4739]: I0121 16:30:43.782694 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:30:43 crc kubenswrapper[4739]: E0121 16:30:43.783327 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:30:50 crc kubenswrapper[4739]: I0121 16:30:50.944085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.000663 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.412160 4739 scope.go:117] "RemoveContainer" containerID="95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c" Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.757552 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"] Jan 21 16:30:52 crc kubenswrapper[4739]: I0121 16:30:52.663690 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" containerID="cri-o://51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" gracePeriod=2 Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.675758 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" exitCode=0 Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.675950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04"} Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.676073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254"} Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.676090 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.752142 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.760806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.761075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.761151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.762397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities" (OuterVolumeSpecName: "utilities") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.769121 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8" (OuterVolumeSpecName: "kube-api-access-6hlc8") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "kube-api-access-6hlc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.864406 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.864439 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.914690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.979743 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.683958 4739 util.go:48] "No ready sandbox for pod can be found. 
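[Note] The teardown above follows the API-initiated delete at 16:30:51: the kubelet kills registry-server with a 2-second grace period, PLEG reports the container and its sandbox as died, volumes are unmounted and marked detached, and the final "SyncLoop REMOVE" plus orphaned-volume cleanup appear below. A hedged client-go sketch of the API-side delete that starts such a chain; the explicit 2s grace period here mirrors gracePeriod=2 in the log, though in the real event it most likely came from the pod's own terminationGracePeriodSeconds rather than a delete option:

    // delete.go: issue the pod delete that triggers a teardown like the
    // one logged above. Connection setup assumes in-cluster credentials.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        grace := int64(2)
        err = client.CoreV1().Pods("openshift-marketplace").Delete(
            context.Background(),
            "redhat-operators-qp85b",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        if err != nil {
            panic(err)
        }
    }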
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.717725 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.726420 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.793468 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" path="/var/lib/kubelet/pods/ac9e812f-2593-473d-8591-b4d2a0b581d9/volumes"
Jan 21 16:30:56 crc kubenswrapper[4739]: I0121 16:30:56.782927 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:56 crc kubenswrapper[4739]: E0121 16:30:56.783635 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:09 crc kubenswrapper[4739]: I0121 16:31:09.782426 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:09 crc kubenswrapper[4739]: E0121 16:31:09.783045 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:22 crc kubenswrapper[4739]: I0121 16:31:22.785103 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:22 crc kubenswrapper[4739]: E0121 16:31:22.785931 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:33 crc kubenswrapper[4739]: I0121 16:31:33.783257 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:33 crc kubenswrapper[4739]: E0121 16:31:33.783927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:46 crc kubenswrapper[4739]: I0121 16:31:46.782768 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:46 crc kubenswrapper[4739]: E0121 16:31:46.783461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:59 crc kubenswrapper[4739]: I0121 16:31:59.783156 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:59 crc kubenswrapper[4739]: E0121 16:31:59.783880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:12 crc kubenswrapper[4739]: I0121 16:32:12.783249 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:12 crc kubenswrapper[4739]: E0121 16:32:12.784210 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:27 crc kubenswrapper[4739]: I0121 16:32:27.783190 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:27 crc kubenswrapper[4739]: E0121 16:32:27.784121 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:38 crc kubenswrapper[4739]: I0121 16:32:38.790806 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:38 crc kubenswrapper[4739]: E0121 16:32:38.793740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:52 crc kubenswrapper[4739]: I0121 16:32:52.783249 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:52 crc kubenswrapper[4739]: E0121 16:32:52.784053 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.570494 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.570939 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.585903 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.586254 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.608881 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.641715 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"]
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642343 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642356 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server"
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642373 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-content"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642379 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-content"
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642407 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-utilities"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642413 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-utilities"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642657 4739 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.643924 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.656091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.703895 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.792901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.792981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.793112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.896414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " 
pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.896714 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:09 crc kubenswrapper[4739]: I0121 16:33:09.319679 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:09 crc kubenswrapper[4739]: I0121 16:33:09.567163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.154394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:10 crc kubenswrapper[4739]: W0121 16:33:10.182077 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8a9dd0_13e3_44fb_9f6e_b3248c1e3b2e.slice/crio-40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479 WatchSource:0}: Error finding container 40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479: Status 404 returned error can't find the container with id 40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479 Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675539 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f" exitCode=0 Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f"} Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479"} Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.678676 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:33:11 crc kubenswrapper[4739]: I0121 16:33:11.687605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9"} Jan 21 16:33:12 crc kubenswrapper[4739]: I0121 16:33:12.698186 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9" exitCode=0 Jan 21 16:33:12 crc kubenswrapper[4739]: I0121 16:33:12.698232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" 
event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9"} Jan 21 16:33:13 crc kubenswrapper[4739]: I0121 16:33:13.710617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd"} Jan 21 16:33:13 crc kubenswrapper[4739]: I0121 16:33:13.738169 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5cw8w" podStartSLOduration=3.188291437 podStartE2EDuration="5.738149449s" podCreationTimestamp="2026-01-21 16:33:08 +0000 UTC" firstStartedPulling="2026-01-21 16:33:10.677667841 +0000 UTC m=+4022.368374105" lastFinishedPulling="2026-01-21 16:33:13.227525853 +0000 UTC m=+4024.918232117" observedRunningTime="2026-01-21 16:33:13.734461618 +0000 UTC m=+4025.425167882" watchObservedRunningTime="2026-01-21 16:33:13.738149449 +0000 UTC m=+4025.428855713" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.567439 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.567978 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.764155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.822326 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:20 crc kubenswrapper[4739]: I0121 16:33:20.006943 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:21 crc kubenswrapper[4739]: I0121 16:33:21.775328 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5cw8w" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" containerID="cri-o://bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" gracePeriod=2 Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.786104 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" exitCode=0 Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.794941 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd"} Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.880311 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.989682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.990033 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.990071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.992437 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities" (OuterVolumeSpecName: "utilities") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.997925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x" (OuterVolumeSpecName: "kube-api-access-wzt5x") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "kube-api-access-wzt5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.026890 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092410 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092643 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092712 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.796999 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479"} Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.797046 4739 scope.go:117] "RemoveContainer" containerID="bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.797085 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.819612 4739 scope.go:117] "RemoveContainer" containerID="d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.837832 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.863265 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.883756 4739 scope.go:117] "RemoveContainer" containerID="91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f" Jan 21 16:33:24 crc kubenswrapper[4739]: I0121 16:33:24.794657 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" path="/var/lib/kubelet/pods/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e/volumes" Jan 21 16:33:51 crc kubenswrapper[4739]: I0121 16:33:51.569785 4739 scope.go:117] "RemoveContainer" containerID="adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" Jan 21 16:33:51 crc kubenswrapper[4739]: I0121 16:33:51.595528 4739 scope.go:117] "RemoveContainer" containerID="130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" Jan 21 16:35:35 crc kubenswrapper[4739]: I0121 16:35:35.222699 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:35 crc kubenswrapper[4739]: I0121 16:35:35.223514 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.254066 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260101 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260140 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260159 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-utilities" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260167 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-utilities" Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260202 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-content" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260210 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-content" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260642 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.262522 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.274148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod 
\"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.454180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.454417 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.474210 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.583577 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:55 crc kubenswrapper[4739]: I0121 16:35:55.052566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.119706 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" exitCode=0 Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.119976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f"} Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.120000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"76c436544215c98afc12f0ea818f80948559f153bfea1c190682a9e488a2118b"} Jan 21 16:35:57 crc kubenswrapper[4739]: I0121 16:35:57.130215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.064105 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.075578 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.086034 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.097540 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.148511 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" exitCode=0 Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.148564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.161315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.793310 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294fb480-1e0e-452c-979d-affc62bad155" path="/var/lib/kubelet/pods/294fb480-1e0e-452c-979d-affc62bad155/volumes" Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.794610 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca676c7-1887-4337-b60b-c782c3002f46" path="/var/lib/kubelet/pods/dca676c7-1887-4337-b60b-c782c3002f46/volumes" Jan 21 
16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.584091 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.584635 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.628586 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.647713 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sj86g" podStartSLOduration=7.188403138 podStartE2EDuration="10.647695481s" podCreationTimestamp="2026-01-21 16:35:54 +0000 UTC" firstStartedPulling="2026-01-21 16:35:56.122336361 +0000 UTC m=+4187.813042625" lastFinishedPulling="2026-01-21 16:35:59.581628704 +0000 UTC m=+4191.272334968" observedRunningTime="2026-01-21 16:36:00.184194805 +0000 UTC m=+4191.874901069" watchObservedRunningTime="2026-01-21 16:36:04.647695481 +0000 UTC m=+4196.338401745" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.222953 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.223007 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.578233 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.628890 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.227018 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sj86g" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" containerID="cri-o://29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" gracePeriod=2 Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.775812 4739 util.go:48] "No ready sandbox for pod can be found. 
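
The pod_startup_latency_tracker entry above encodes simple arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That reading is inferred from the field names, but recomputing it from the monotonic m=+ offsets in the entry reproduces the logged values exactly:

# Recomputing the certified-operators-sj86g startup entry above from its
# monotonic m=+ offsets. Field semantics inferred from the names; the
# result matches the logged podStartSLOduration to the nanosecond.
first_pull = 4187.813042625    # firstStartedPulling   (m=+ offset)
last_pull  = 4191.272334968    # lastFinishedPulling   (m=+ offset)
e2e        = 10.647695481      # podStartE2EDuration = watchObserved - creation

pull_window = last_pull - first_pull   # 3.459292343 s spent pulling the image
slo = e2e - pull_window                # SLO duration excludes the pull window
print(round(slo, 9))                   # 7.188403138, as logged above

The same subtraction also checks out against the redhat-marketplace-5cw8w entry earlier (5.738149449 - 2.549858012 = 3.188291437).
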
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944185 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.945865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities" (OuterVolumeSpecName: "utilities") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.961633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz" (OuterVolumeSpecName: "kube-api-access-wd7xz") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "kube-api-access-wd7xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.009693 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047354 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047393 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047404 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236299 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" exitCode=0 Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"76c436544215c98afc12f0ea818f80948559f153bfea1c190682a9e488a2118b"} Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236413 4739 scope.go:117] "RemoveContainer" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236541 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.264338 4739 scope.go:117] "RemoveContainer" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.282325 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.302478 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.305994 4739 scope.go:117] "RemoveContainer" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.343932 4739 scope.go:117] "RemoveContainer" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344281 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": container with ID starting with 29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0 not found: ID does not exist" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344307 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} err="failed to get container status \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": rpc error: code = NotFound desc = could not find container \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": container with ID starting with 29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0 not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344327 4739 scope.go:117] "RemoveContainer" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344609 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": container with ID starting with 6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631 not found: ID does not exist" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344630 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} err="failed to get container status \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": rpc error: code = NotFound desc = could not find container \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": container with ID starting with 6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631 not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344643 4739 scope.go:117] "RemoveContainer" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344979 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": container with ID starting with c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f not found: ID does not exist" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.345004 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f"} err="failed to get container status \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": rpc error: code = NotFound desc = could not find container \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": container with ID starting with c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.792801 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" path="/var/lib/kubelet/pods/f6abeeeb-f02d-4dee-a254-f00ad252a579/volumes" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223001 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223580 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223631 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.224475 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.224530 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e" gracePeriod=600 Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601018 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e" exitCode=0 Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" 
event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601109 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:36:36 crc kubenswrapper[4739]: I0121 16:36:36.613025 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"} Jan 21 16:36:45 crc kubenswrapper[4739]: I0121 16:36:45.050263 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:36:45 crc kubenswrapper[4739]: I0121 16:36:45.060371 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:36:46 crc kubenswrapper[4739]: I0121 16:36:46.794544 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" path="/var/lib/kubelet/pods/fbe8edfb-cbd4-4468-be6c-40d6af0682ee/volumes" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.701324 4739 scope.go:117] "RemoveContainer" containerID="1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.729885 4739 scope.go:117] "RemoveContainer" containerID="5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.792068 4739 scope.go:117] "RemoveContainer" containerID="51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.830271 4739 scope.go:117] "RemoveContainer" containerID="64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.857702 4739 scope.go:117] "RemoveContainer" containerID="b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.905502 4739 scope.go:117] "RemoveContainer" containerID="6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.287425 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288280 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-content" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-content" Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288305 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288311 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288324 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-utilities" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288336 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-utilities" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288548 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.290427 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.328884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399262 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.502427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: 
I0121 16:38:14.502585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.530778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.616127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.203317 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.434929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.434968 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"8c09f125c21f41afeeb510b08716522de590069719aac7756ba3e8de1078cac3"} Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.450410 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e" exitCode=0 Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.450512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.452953 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:38:17 crc kubenswrapper[4739]: I0121 16:38:17.463245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} Jan 21 16:38:18 crc kubenswrapper[4739]: I0121 16:38:18.471625 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560" exitCode=0 Jan 21 16:38:18 crc kubenswrapper[4739]: I0121 16:38:18.471675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} Jan 21 16:38:19 crc kubenswrapper[4739]: I0121 16:38:19.480028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"} Jan 21 16:38:19 crc kubenswrapper[4739]: I0121 16:38:19.504531 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mmz2" podStartSLOduration=2.856973112 podStartE2EDuration="5.504511492s" podCreationTimestamp="2026-01-21 16:38:14 +0000 UTC" firstStartedPulling="2026-01-21 16:38:16.45270926 +0000 UTC m=+4328.143415524" lastFinishedPulling="2026-01-21 16:38:19.10024764 +0000 UTC m=+4330.790953904" observedRunningTime="2026-01-21 16:38:19.500866684 +0000 UTC m=+4331.191572948" watchObservedRunningTime="2026-01-21 16:38:19.504511492 +0000 UTC m=+4331.195217756" Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.617324 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.617896 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.681364 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:25 crc kubenswrapper[4739]: I0121 16:38:25.584646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:25 crc kubenswrapper[4739]: I0121 16:38:25.635351 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:27 crc kubenswrapper[4739]: I0121 16:38:27.553064 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mmz2" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server" containerID="cri-o://e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" gracePeriod=2 Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.092192 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191475 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.192479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities" (OuterVolumeSpecName: "utilities") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.214196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9" (OuterVolumeSpecName: "kube-api-access-cc9w9") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "kube-api-access-cc9w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.251975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.293641 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") on node \"crc\" DevicePath \"\"" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.293939 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.294023 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.562926 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" exitCode=0 Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.562995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"} Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563018 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"8c09f125c21f41afeeb510b08716522de590069719aac7756ba3e8de1078cac3"} Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563201 4739 scope.go:117] "RemoveContainer" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.592873 4739 scope.go:117] "RemoveContainer" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.600203 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.611526 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.621410 4739 scope.go:117] "RemoveContainer" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.668925 4739 scope.go:117] "RemoveContainer" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.669411 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": container with ID starting with e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873 not found: ID does not exist" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669452 
4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"} err="failed to get container status \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": rpc error: code = NotFound desc = could not find container \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": container with ID starting with e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873 not found: ID does not exist" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669477 4739 scope.go:117] "RemoveContainer" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560" Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.669905 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": container with ID starting with d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560 not found: ID does not exist" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669937 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} err="failed to get container status \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": rpc error: code = NotFound desc = could not find container \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": container with ID starting with d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560 not found: ID does not exist" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669959 4739 scope.go:117] "RemoveContainer" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e" Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.670422 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": container with ID starting with 3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e not found: ID does not exist" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.670463 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} err="failed to get container status \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": rpc error: code = NotFound desc = could not find container \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": container with ID starting with 3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e not found: ID does not exist" Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.792937 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" path="/var/lib/kubelet/pods/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d/volumes" Jan 21 16:38:56 crc kubenswrapper[4739]: I0121 16:38:56.301139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get 
\"https://10.217.0.164:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:39:31 crc kubenswrapper[4739]: I0121 16:39:31.842624 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 21 16:39:31 crc kubenswrapper[4739]: E0121 16:39:31.952884 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.053859 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.254986 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.656011 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:33 crc kubenswrapper[4739]: E0121 16:39:33.457067 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:35 crc kubenswrapper[4739]: E0121 16:39:35.057356 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:38 crc kubenswrapper[4739]: E0121 16:39:38.258334 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:39:43 crc kubenswrapper[4739]: E0121 16:39:43.258802 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623035 4739 reflector.go:484] object-"openshift-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623358 4739 reflector.go:484] object-"openshift-console"/"console-oauth-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623648 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"samples-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623678 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623712 4739 reflector.go:484] object-"openshift-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623841 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623891 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623923 4739 reflector.go:484] object-"openshift-ingress-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623963 4739 reflector.go:484] object-"openshift-nmstate"/"plugin-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623985 4739 reflector.go:484] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624013 4739 reflector.go:484] object-"openstack"/"cert-glance-default-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624065 4739 reflector.go:484] object-"openstack"/"cert-ceilometer-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624094 4739 reflector.go:484] object-"metallb-system"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624111 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624129 4739 reflector.go:484] object-"openshift-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624161 4739 reflector.go:484] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624208 4739 reflector.go:484] object-"openstack"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624251 4739 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624284 4739 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624316 4739 reflector.go:484] object-"openstack"/"nova-cell1-novncproxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624360 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-login": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624418 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624445 4739 reflector.go:484] object-"metallb-system"/"metallb-webhook-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624464 4739 reflector.go:484] object-"openstack"/"keystone-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624521 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624544 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-cliconfig": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624581 4739 reflector.go:484] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624608 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624613 4739 reflector.go:484] object-"openshift-authentication-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624629 4739 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624648 4739 reflector.go:484] object-"openshift-marketplace"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624666 4739 reflector.go:484] object-"openstack"/"manila-manila-dockercfg-c8ppn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624681 4739 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624699 4739 reflector.go:484] object-"openstack"/"ovnnorthd-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624560 4739 reflector.go:484] object-"openshift-authentication"/"audit": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624742 4739 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624748 4739 reflector.go:484] object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624306 4739 reflector.go:484] object-"openstack"/"cert-keystone-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624781 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624797 4739 reflector.go:484] object-"openshift-console"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624835 4739 reflector.go:484] object-"openshift-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624701 4739 reflector.go:484] object-"cert-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624897 4739 reflector.go:484] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624911 4739 reflector.go:484] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624931 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624947 4739 reflector.go:484] object-"openstack"/"glance-default-internal-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624952 4739 reflector.go:484] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624800 4739 reflector.go:484] object-"openshift-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624986 4739 reflector.go:484] object-"openshift-route-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625014 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625044 4739 reflector.go:484] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625064 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625086 4739 reflector.go:484] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625091 4739 reflector.go:484] object-"openstack"/"cert-ovnnorthd-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625109 4739 reflector.go:484] object-"openstack"/"tempest-tests-tempest-custom-data-s0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625117 4739 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625143 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625174 4739 reflector.go:484] object-"openstack"/"combined-ca-bundle": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625148 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625211 4739 reflector.go:484] object-"openstack"/"keystone-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625216 4739 reflector.go:484] object-"openshift-console"/"console-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625243 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625249 4739 reflector.go:484] object-"openstack"/"cert-glance-default-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625276 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625298 4739 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625325 4739 reflector.go:484] object-"openshift-ingress"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625359 4739 reflector.go:484] object-"cert-manager"/"cert-manager-dockercfg-2ngl6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625379 4739 reflector.go:484] object-"openstack"/"barbican-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625417 4739 reflector.go:484] object-"openstack"/"horizon-horizon-dockercfg-5hs8m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625453 4739 reflector.go:484] object-"openshift-nmstate"/"default-dockercfg-t5zpb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625486 4739 reflector.go:484] object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625517 4739 reflector.go:484] object-"openstack"/"neutron-httpd-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625553 4739 reflector.go:484] object-"openshift-multus"/"multus-admission-controller-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625576 4739 reflector.go:484] object-"openshift-ingress"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625604 4739 reflector.go:484] object-"openshift-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625638 4739 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625691 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625725 4739 reflector.go:484] object-"openstack"/"ceilometer-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625762 4739 reflector.go:484] object-"metallb-system"/"controller-dockercfg-nhqx4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625801 4739 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625875 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625913 4739 reflector.go:484] object-"openshift-cluster-version"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625950 4739 reflector.go:484] object-"openshift-ingress-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625990 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626027 4739 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626040 4739 reflector.go:484] object-"openshift-machine-config-operator"/"mcc-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626069 4739 reflector.go:484] object-"openshift-ingress"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626092 4739 reflector.go:484] object-"openstack"/"horizon-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626125 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626150 4739 reflector.go:484] object-"openshift-console"/"default-dockercfg-chnjx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626164 4739 reflector.go:484] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626197 4739 reflector.go:484] object-"openshift-machine-api"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626233 4739 reflector.go:484] object-"openshift-apiserver"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626241 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626255 4739 reflector.go:484] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626284 4739 reflector.go:484] object-"openshift-authentication-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626200 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626339 4739 reflector.go:484] object-"openstack"/"cert-cinder-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626403 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626455 4739 reflector.go:484] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626526 4739 reflector.go:484] object-"openshift-machine-api"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626550 4739 reflector.go:484] object-"openstack"/"cert-placement-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626597 4739 reflector.go:484] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626587 4739 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626647 4739 reflector.go:484] object-"openshift-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626690 4739 reflector.go:484] object-"openstack"/"placement-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626721 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626755 4739 reflector.go:484] object-"openshift-console-operator"/"console-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626794 4739 reflector.go:484] object-"openshift-authentication-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626853 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626906 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626948 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626979 4739 reflector.go:484] object-"openshift-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627036 4739 reflector.go:484] object-"openshift-nmstate"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627071 4739 reflector.go:484] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627107 4739 reflector.go:484] object-"openshift-image-registry"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627126 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627167 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627216 4739 reflector.go:484] object-"openstack-operators"/"infra-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627228 4739 reflector.go:484] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627269 4739 reflector.go:484] object-"openstack"/"manila-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627318 4739 reflector.go:484] object-"openstack"/"cert-barbican-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627353 4739 reflector.go:484] object-"openstack"/"nova-nova-dockercfg-lfw7x": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627385 4739 reflector.go:484] object-"openshift-dns"/"dns-default": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627427 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-error": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627462 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627500 4739 reflector.go:484] object-"openshift-marketplace"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627534 4739 reflector.go:484] object-"openstack"/"cert-rabbitmq-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625015 4739 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625045 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625066 4739 reflector.go:484] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627586 4739 reflector.go:484] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627605 4739 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626126 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627645 4739 reflector.go:484] object-"openshift-ingress-canary"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627675 4739 reflector.go:484] object-"openstack"/"openstack-config-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627708 4739 reflector.go:484] object-"openshift-console"/"service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627739 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627774 4739 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627804 4739 reflector.go:484] object-"openshift-console-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627857 4739 reflector.go:484] object-"openshift-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627889 4739 reflector.go:484] object-"openshift-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627923 4739 reflector.go:484] object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627957 4739 reflector.go:484] object-"openstack"/"nova-cell0-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627987 4739 reflector.go:484] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628020 4739 reflector.go:484] object-"openshift-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628053 4739 reflector.go:484] object-"openstack-operators"/"webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628084 4739 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628118 4739 reflector.go:484] object-"hostpath-provisioner"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628149 4739 reflector.go:484] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628182 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"pprof-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628214 4739 reflector.go:484] object-"openshift-ingress"/"router-dockercfg-zdk86": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628244 4739 reflector.go:484] object-"metallb-system"/"metallb-memberlist": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628276 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-global-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628307 4739 reflector.go:484] object-"openshift-service-ca"/"signing-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628338 4739 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628371 4739 reflector.go:484] object-"openshift-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628401 4739 reflector.go:484] object-"openstack"/"cert-memcached-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an
event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628430 4739 reflector.go:484] object-"openstack"/"openstack-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628496 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628530 4739 reflector.go:484] object-"openshift-cluster-version"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628661 4739 reflector.go:484] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629440 4739 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629470 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-session": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629535 4739 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629631 4739 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629697 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629732 4739 reflector.go:484] object-"openstack"/"cert-neutron-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629786 4739 reflector.go:484] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629838 4739 reflector.go:484] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629874 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629919 4739 reflector.go:484] object-"openshift-nmstate"/"nginx-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629955 4739 reflector.go:484] object-"openstack"/"glance-default-external-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629982 4739 reflector.go:484] object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630018 4739 reflector.go:484] object-"openshift-authentication"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630052 4739 reflector.go:484] object-"openstack"/"neutron-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630085 4739 reflector.go:484] object-"openshift-ingress"/"router-metrics-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630112 4739 reflector.go:484] object-"openstack"/"dns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630155 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630188 4739 reflector.go:484] object-"openstack"/"ovndbcluster-nb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event 
from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630223 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630248 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630284 4739 reflector.go:484] object-"openstack"/"nova-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630320 4739 reflector.go:484] object-"openstack"/"cert-nova-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630374 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630405 4739 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.631196 4739 reflector.go:484] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.631230 4739 reflector.go:484] object-"openstack"/"cinder-volume-volume1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.632925 4739 reflector.go:484] object-"openstack"/"openstack-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.633627 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.633662 4739 reflector.go:484] object-"openshift-etcd-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on 
the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.635803 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.669145 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.102:11211: i/o timeout" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627141 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: E0121 16:40:06.672765 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.635978 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.637624 4739 reflector.go:484] object-"openstack"/"cert-galera-openstack-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.638012 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.640542 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.643133 4739 reflector.go:484] object-"openstack"/"ovncontroller-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.643214 4739 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.645291 4739 reflector.go:484] object-"openshift-ingress-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: W0121 16:40:06.646954 4739 reflector.go:484] object-"openstack"/"barbican-keystone-listener-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.647682 4739 reflector.go:484] object-"openshift-machine-config-operator"/"mco-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.647969 4739 reflector.go:484] object-"openstack"/"ovncontroller-metrics-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.648100 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.649874 4739 reflector.go:484] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.649909 4739 reflector.go:484] object-"openstack"/"cinder-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.650139 4739 reflector.go:484] object-"openshift-route-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.651551 4739 reflector.go:484] object-"openstack"/"dns-svc": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.651574 4739 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.656990 4739 reflector.go:484] object-"openstack"/"horizon": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.658067 4739 reflector.go:484] object-"openstack"/"neutron-neutron-dockercfg-nsbps": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: W0121 16:40:06.659968 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660939 4739 reflector.go:484] object-"metallb-system"/"frr-startup": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660977 4739 reflector.go:484] object-"openshift-ingress"/"router-stats-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660994 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.661122 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663315 4739 reflector.go:484] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663351 4739 reflector.go:484] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663498 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.665987 4739 reflector.go:484] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.666018 4739 reflector.go:484] object-"openstack"/"keystone": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.667191 4739 reflector.go:484] object-"openstack"/"horizon-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.668034 4739 reflector.go:484] object-"openstack"/"cinder-backup-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.668643 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669219 4739 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669621 4739 reflector.go:484] object-"openshift-apiserver"/"image-import-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669676 4739 reflector.go:484] object-"openstack"/"cert-ovn-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670330 4739 reflector.go:484] object-"openstack"/"default-dockercfg-c9nsw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670353 4739 reflector.go:484] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670948 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.708579 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.671631 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.672310 4739 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.674514 4739 reflector.go:484] object-"openstack"/"cert-manila-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.675376 4739 reflector.go:484] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.681022 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.682974 4739 reflector.go:484] object-"openshift-route-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.683322 4739 reflector.go:484] object-"metallb-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.683567 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.692587 4739 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.695174 4739 reflector.go:484] object-"openshift-config-operator"/"config-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.695188 4739 reflector.go:484] object-"openstack"/"cert-nova-metadata-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.696312 4739 reflector.go:484] object-"openstack-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698006 4739 reflector.go:484] object-"openstack"/"openstack-edpm-ipam": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the 
watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698047 4739 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698073 4739 reflector.go:484] object-"openshift-console"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698092 4739 reflector.go:484] object-"openshift-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705296 4739 reflector.go:484] object-"openstack"/"placement-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705316 4739 reflector.go:484] object-"openshift-dns-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705332 4739 reflector.go:484] object-"openstack"/"memcached-memcached-dockercfg-6ntnw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705348 4739 reflector.go:484] object-"openshift-machine-config-operator"/"node-bootstrapper-token": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705367 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.710178 4739 reflector.go:484] object-"openstack"/"rabbitmq-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.711646 4739 reflector.go:484] object-"openshift-console"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.711682 4739 reflector.go:484] object-"openstack"/"ovsdbserver-sb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.715777 4739 reflector.go:484] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.725649 4739 trace.go:236] Trace[445167548]: "Calculate volume metrics of metrics-certs for pod openshift-ingress/router-default-5444994796-hm72p" (21-Jan-2026 16:39:31.883) (total time: 34780ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[445167548]: [34.780677533s] [34.780677533s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.726302 4739 trace.go:236] Trace[325841641]: "Calculate volume metrics of trusted-ca-bundle for pod openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" (21-Jan-2026 16:39:31.866) (total time: 34779ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[325841641]: [34.779727986s] [34.779727986s] END Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.728619 4739 reflector.go:484] object-"openstack"/"manila-share-share1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.728733 4739 reflector.go:484] object-"openshift-image-registry"/"installation-pull-secrets": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729263 4739 reflector.go:484] object-"openshift-dns-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729331 4739 reflector.go:484] object-"openstack"/"nova-cell1-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729379 4739 reflector.go:484] object-"openstack"/"glance-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729405 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729431 4739 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.730511 4739 reflector.go:484] 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.733033 4739 reflector.go:484] object-"metallb-system"/"manager-account-dockercfg-g7lpv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.748401 4739 trace.go:236] Trace[1083336257]: "iptables ChainExists" (21-Jan-2026 16:39:31.954) (total time: 34793ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[1083336257]: [34.793417249s] [34.793417249s] END Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.750805 4739 reflector.go:484] object-"openstack"/"cert-keystone-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.752194 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.797693026s: [/var/lib/containers/storage/overlay/8d9b961a66de93b3e59111f673f1f19df11a03a0dee1ae680050b8605b588f51/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.753737 4739 reflector.go:484] object-"openshift-console-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.755625 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.756625 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.190:5671: i/o timeout" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.764935 4739 reflector.go:484] object-"openshift-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.765008 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.765049 4739 reflector.go:484] object-"openstack"/"manila-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625882 4739 reflector.go:484] object-"openstack"/"cert-manila-public-svc": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.747890 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.793679206s: [/var/lib/containers/storage/overlay/f9bada9b35b9deb9b74f1374a417ebebb5ddbce6ffb0f382957da9670619d5a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.767944 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.811049149s: [/var/lib/containers/storage/overlay/e7a11c75cbb5edae5aa8e41ba61d6931b305cb6adb285f312047f8c806910dc4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.768428 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.808338766s: [/var/lib/containers/storage/overlay/d3a91154fc2f9dd69f74e1db80cbb5fd689c98f7e0ce08214cd28201d59f0a24/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.770000 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.809265791s: [/var/lib/containers/storage/overlay/dfbd4a906f1b2b76d7c5c5776d7c380618b7c45cc9c3da7b99b683a9ee486aac/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.770596 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.809195909s: [/var/lib/containers/storage/overlay/d6ad62b06c2b60c7456f7a17d7d5d12fcf18af098b116ccf5741e93471a56623/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.778883 4739 reflector.go:484] object-"openstack"/"cert-neutron-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.784779 4739 reflector.go:484] object-"openshift-dns"/"dns-dockercfg-jwfmh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.785692 4739 reflector.go:484] object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.792835 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.929330441s: [/var/lib/containers/storage/overlay/ebe2325978d8c7d466c16cb6584280fe4c78a8a445a928c19dc2f9536b3650f5/diff /var/log/pods/openshift-image-registry_image-registry-66df7c8f76-t5799_ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7/registry/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793625 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831640301s: [/var/lib/containers/storage/overlay/25d40a4e4a01895cbd296666883c85cdbd318ad1570084b6bd656a798234c93d/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793861 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831743092s: [/var/lib/containers/storage/overlay/ab3cb151afbd63b13d8af8a421f96a67d06eb95920f2e012e4eb44ef6a7a9d58/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793906 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831789214s: [/var/lib/containers/storage/overlay/909ad070504a5cb6e034b94c2aac48b45f984cd2c311d41d12cfff24f35ec627/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.748299 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.757973113s: [/var/lib/containers/storage/overlay/cbee4cce5015d7e8fee31960cade04cfd90d66f8fe16a9ef6c2ef007c39a5ce7/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.799957 4739 reflector.go:484] object-"openshift-ingress"/"router-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.800454 4739 reflector.go:484] object-"openshift-service-ca"/"signing-cabundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.807498 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624295 4739 reflector.go:484] object-"openshift-network-diagnostics"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.807559 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.823638 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824483 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862355356s: [/var/lib/containers/storage/overlay/07357dfd86c3e67e894bf615a2c0afdcaa85c0fb1e1f6272745f42caac136b7d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824528 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862408698s: 
[/var/lib/containers/storage/overlay/bdddc467575f25318e52dbdef763bcb9fc8cf909c2e9ab0030bf88ea4fe1c152/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824558 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862435279s: [/var/lib/containers/storage/overlay/f1004402dcc2ba2c2fc35ded662d21d78489da0b0acc9a86765c647cce6b2a12/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.825697 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.863458977s: [/var/lib/containers/storage/overlay/68bb6ce1ef9dc9d0097e6a158cdc205f5248c1d68a6a27dc7e59a8360b5c9084/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827356 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864780563s: [/var/lib/containers/storage/overlay/024bb67732177bbd521d69c7e909848843c1640553b19db3df0f28e2e7eec1b3/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827410 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864247008s: [/var/lib/containers/storage/overlay/c8816f9cf43c161e973596daf9223fa91dcecdcca7d13b5b08544a1847424b25/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827450 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864285449s: [/var/lib/containers/storage/overlay/6828f01779d4fcbaf1e3512fe7c74d97614034da649608c9acba14773abc80b6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827486 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.86432s: [/var/lib/containers/storage/overlay/52b77254503b0c4285c70180af6cfa2fb18180ef5f6ba111fa3c3fc51c8444b6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827524 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864357031s: [/var/lib/containers/storage/overlay/b22c292ddc66217f0de736b44a863258c7599253f9f558ca003e60c89d3861b5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827560 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864392282s: [/var/lib/containers/storage/overlay/8e4b51d55790fed940afe3c6801781f6d3c9aa2feae37009bc883539ec512ee6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827599 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864429573s: [/var/lib/containers/storage/overlay/053b76691dad2ce7a757dea43469bf9a5173366b591011cf6c27e1dc96097757/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827634 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864363052s: [/var/lib/containers/storage/overlay/a0821c411c1e5ca39a3de84f53e32cbf49f262703054c6ece25b0dd493fac2f0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.828925 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865333097s: [/var/lib/containers/storage/overlay/2921362bd60e23d5af204064e7f4097ca4c8948c6bfa11286f7234759de34098/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.828990 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.86505155s: [/var/lib/containers/storage/overlay/92fe7b1b407d65e5591c8b2a5435997bad5bbd7dece4aebb598d47d57b4a19cc/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829030 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865088111s: [/var/lib/containers/storage/overlay/f8a49902f6047dd912feb89744918d1d417d8d61410e1101362aa9608bbb7059/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829068 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865125362s: [/var/lib/containers/storage/overlay/d4e544d53ffa2d47aa7fdc9c4bd008c27f14b48c199ff79a1a7964aae920314f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829109 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865164823s: [/var/lib/containers/storage/overlay/73c44b68f94badbd48c59cb8ea9145569f1fa28a38bba417edad79a9001b6d1c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829148 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865202094s: [/var/lib/containers/storage/overlay/77d67ab8b3a6fb608aa21ec07213cf87ff9cd5ea152c3cd2ab148aa46fc31437/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829190 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865243315s: [/var/lib/containers/storage/overlay/7ed3669a36afd250de278fb3369e46394c6dc19f620eddfca84d50750eadcfcf/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829226 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865277826s: [/var/lib/containers/storage/overlay/4c19e2c7eebfa0c3240697fbcc7e023b8761d98368f8b84944bd6d1a54890a1f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829261 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865310797s: [/var/lib/containers/storage/overlay/e05cce1c693dcdf843c0a0f3df7b759c46a1ba404c9d452ba345d76be376bfe2/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829299 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865348128s: [/var/lib/containers/storage/overlay/0a7841679b7462ce69aba5893268cddbb7bb69221ba36331f4971c79b58258fd/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829341 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865388308s: [/var/lib/containers/storage/overlay/b66afd63224033e1cf6f791bde175fa07a2d48b43decb9fd10253f41ac4b92df/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829385 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865429849s: [/var/lib/containers/storage/overlay/df859f5510e225258759e92baf823be691bf3f9b5b1ee4d64583840a456f1c23/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829425 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865468531s: [/var/lib/containers/storage/overlay/5acba505b0bd4c70980152e92d00aaa29db286420c56451b23299720195ae132/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829460 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865487501s: [/var/lib/containers/storage/overlay/0e4efc4f232eeef82a5080074aadcc4d740327569dfda5e0b5a72939c48b279b/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829514 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865506132s: [/var/lib/containers/storage/overlay/194af09a42bc138702ca4d2360feb69bbc747469dce8b9a7b2a2c8ea6932f1a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829562 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865572394s: [/var/lib/containers/storage/overlay/4449eba3250dd1cea3487aa05c00bfc560ff8bb48259f0e08365c63bfbe3f09a/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.836231 4739 reflector.go:484] object-"openstack"/"galera-openstack-dockercfg-5d5ff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.838671 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.838730 4739 reflector.go:484] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.849944 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.89629151s: [/var/lib/containers/storage/overlay/ab0c5f2722f7b1d4b5cf3c4c8f440f80f7b60264b7861598385d9ab4780a7d95/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850333 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885633901s: [/var/lib/containers/storage/overlay/a8c4f45da950f3483f96190df7477f70fc4e30e73397abf0924a4d1d691f4424/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850380 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885680722s: 
[/var/lib/containers/storage/overlay/5a4c5d04e81dcb31e65a15d642df38c1abf9d3dd0cc9c931641e7b923deca7f5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850418 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885717313s: [/var/lib/containers/storage/overlay/9c27d9c05089a8f5eab3ce59d8dda820e772ca6e406dd3befc1d6f446d05a6ad/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850416 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885705203s: [/var/lib/containers/storage/overlay/2676005b51eda083dbbe929c40f8692f6880008686a19fdb0376c593a8c82f56/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850457 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885752224s: [/var/lib/containers/storage/overlay/68e5b2b093904c005724c5ca8a43e79278049271e209c92dbbdec191208d0298/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850482 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885777755s: [/var/lib/containers/storage/overlay/d82740619151d8a5e08c4f23f19f8bf10a5a70aac81ae4fc91b3e52af4c29c9d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850497 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885789735s: [/var/lib/containers/storage/overlay/89396b787ad96ef2ce8f002faf99568bf2d78aa3ffc55355bc14cc45f43f5753/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850526 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885816826s: [/var/lib/containers/storage/overlay/d84fc9dfca018264be0ac8a518c8581aeff83c23ff417afb2ddbc847c04e5346/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850544 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885831475s: [/var/lib/containers/storage/overlay/c744a9116c2739767774fc274ef290afc3baa73354d7fa056877c9d740df6f69/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850567 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885583728s: [/var/lib/containers/storage/overlay/b32efabf521a80a22c268a38423d8948c1259d57e6072c864f1f2e4c0a495826/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857383 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.891428599s: [/var/lib/containers/storage/overlay/c83a83e7ea2b164771edde7d4a5d599714ea27ecff988b25b76888c1a7d04be8/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857757 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.717074894s: [/var/lib/containers/storage/overlay/43d601e9221bd905f1c3f74abfff2ad5cb68f74c102fc8257ec530a6e4ad7f40/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857864 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.703881366s: [/var/lib/containers/storage/overlay/c5dd45e5a4207f724b55e50ed27f6585c49b46348b43d36cb1e54519d1e8fb94/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857906 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.697471312s: [/var/lib/containers/storage/overlay/8b85187555a27fac921785f0a2290dfd09dc33c57d830bdb083aec82c3fa9191/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857945 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617138505s: [/var/lib/containers/storage/overlay/664f34268ba6fa04c3f7f317fcdc65e830ac5800029db176d0a94e86ab6bc658/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857980 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617169036s: [/var/lib/containers/storage/overlay/55209e79823bedab116bcf140ed08580d3a9cd347602c4bafe0b285e00571d61/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.858016 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617189116s: [/var/lib/containers/storage/overlay/4df14cc9f04be978a2920745d0850afb04872863fbc255c3ed94b17fcde737f0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.859301 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 31.119494947s: [/var/lib/containers/storage/overlay/f40796fe6de1b72957a505f4727632123fe35f8a108a6017df3df76bf4892816/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.862084 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.876782161s: [/var/lib/containers/storage/overlay/177c2a929fc27b23423ac3e0badf94434d4984cd0f9762da0270ba3e93734c3e/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.862628 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.300234843s: [/var/lib/containers/storage/overlay/a1ad93d726e77e54f2cf2198aeff57c2f28a559738711df4bb64f6f7944fca25/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863069 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.966917018s: [/var/lib/containers/storage/overlay/68703b7b2cfad5c52eba306e25d35eb0f6632400814181b726863474ae018111/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863876 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.817313076s: [/var/lib/containers/storage/overlay/a0f43a52a884c3284a2defaec8f9ade2217b43d80dd5225a5798c27db8332e33/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863927 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.747246528s: [/var/lib/containers/storage/overlay/9b9f47ac50f38bde36a8f6dd5ada351815763da2a4f0d09a482bd9da9fd054b5/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863969 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.653207276s: [/var/lib/containers/storage/overlay/ab6f34b3893065825d332b29ec92e6079300ef8edaff73aa5ca08db520a18581/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864007 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.637063267s: [/var/lib/containers/storage/overlay/b7116c02d069baece382411454cd643c3cca2ca3954330b4172415b2aa813bbe/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864046 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.589605885s: [/var/lib/containers/storage/overlay/a9bd9dfdea98ef2edf04b5e6fdb6f4f2511584d85560b8d7b4fc03f1cebcdbdb/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864115 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.492935123s: [/var/lib/containers/storage/overlay/9290fdebb10ec6184251f4bc3fec6ca6e8aaac220cb0d2357e302ba0903899aa/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864156 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.434535163s: [/var/lib/containers/storage/overlay/544455d05e948c678e9321aba3a05f04715d1fa1c9027cbd1b364976113c6a61/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864387 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.314699149s: [/var/lib/containers/storage/overlay/b60f034683ac4979bc9c59cff567bcfa8432c8e6b6947059ba36c7a1cd5bbaea/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864436 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.233998373s: [/var/lib/containers/storage/overlay/d1e6ec92de9a4070d637db7fa5455102c02566ef659ed81e3b16d00640072282/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864474 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.139775817s: [/var/lib/containers/storage/overlay/7e54f05657acfcc6b1f083a9451f821b518312ecee59104fcb74afe75fe2b961/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864511 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.106273355s: [/var/lib/containers/storage/overlay/546a888796fa005ac41cc7f14435acb6d83f1dcd88db52cce5370fdfc8a6c5f4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864550 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.10315488s: [/var/lib/containers/storage/overlay/00088994e6cde955e64a05ce88d4533cb6c090d1f10f732b2a649ce057308e2d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864588 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.964787292s: 
[/var/lib/containers/storage/overlay/f0a097b80f8e2b678a20c04fab90d25997c67dfaa763abe669dfcebe8e645b9c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.865955 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.878541135s: [/var/lib/containers/storage/overlay/60069a51be73a0cb99bd4e84472d25e65c04a0df890d78960c1f3fd66aff499d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.866122 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.878709249s: [/var/lib/containers/storage/overlay/8afab9028bcf3faa4fec96b8bda6b018d150f65d67fa339dac000b1e35a62934/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.866986 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.68885507s: [/var/lib/containers/storage/overlay/98e77c4d41c7c8fb36a8201f4e75f9641399acced9ba6f1d0a65017a70b5c9e9/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867301 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.433582429s: [/var/lib/containers/storage/overlay/c36e52060a010cc7ee760bb23428c1a31b9c7129d7b1db2463b0abf7ad7da8c6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867339 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.423345891s: [/var/lib/containers/storage/overlay/407243c2eac1c21dbc6fa86e56cf5b4bd4e1ccdc28e1f4e4fd9d55bcd149aa42/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867670 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.234451148s: [/var/lib/containers/storage/overlay/37c478220050e7f0094ab3c30ea04da53a622cbd513ce9b06bcec11c2b6a6fc5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867717 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.145688101s: [/var/lib/containers/storage/overlay/e0a556b176b5258efbad9159a4937b4d295ad3a3e53993b5af751151ed0aef5c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867746 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.136402778s: [/var/lib/containers/storage/overlay/29d558f231eea70c696e9090a025082175ed6060d07c3d99d47dce0dc62c778c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.859347 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 31.032914259s: [/var/lib/containers/storage/overlay/0b2b2d26a4279187b37613510fb7fb3e50a670e4cb34b4600d08c1d53200d38d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.881049 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.893536403s: [/var/lib/containers/storage/overlay/27266277745e360c87d4cba8ade7028d8b8986af0443d67cfbec31c79c8ec16a/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.884712 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.715654826s: [/var/lib/containers/storage/overlay/2b4fd5e994c133f6f65d633bcb711e449684819f08c00f986ce53673af9763a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.885340 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.337060667s: [/var/lib/containers/storage/overlay/fd72839ef09f08817dc7282e83f8a43ac4b551552ad1ad9bf095254e124c82d0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.885679 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.93192849s: [/var/lib/containers/storage/overlay/be0411731bc7ea79f793d8a524a54245b033a93386843bfa9c2099dc772054a7/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886027 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.092224342s: [/var/lib/containers/storage/overlay/188d1fd69426d7981cec0f8b8d457f62adc9a41c590a37ec054aff76eeeac69d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886577 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.902474915s: [/var/lib/containers/storage/overlay/2618905e5fe18b4096178d07d84982ae644324a5d3618c31258644b30e153544/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886649 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.844805005s: [/var/lib/containers/storage/overlay/416499134a7f3082083600d3174eb5aac4bdb3433572bc5f1ae007d14e5f45d2/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.893661 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.238736382s: [/var/lib/containers/storage/overlay/9438b11e0b2bd74c945987bdb1bd5be8f453609fa0e5f26e2e127f26f7807e15/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.894011 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.165999765s: [/var/lib/containers/storage/overlay/5a46771d875b47f0002e9fdba91593157f4da778a742ff0065d1984edc968e5f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.894131 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94035417s: [/var/lib/containers/storage/overlay/dd6b0b062e0cc4318ffb9ea83c1c1bd2c53bc7315d0841843449670d07ef9141/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913489 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959281625s: [/var/lib/containers/storage/overlay/fd807807ab8970bc222446c2335342cd4f03695eb7c6e88b8625aacb5f3efec5/diff /var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913557 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959355727s: 
[/var/lib/containers/storage/overlay/62bfe17f37de12d3e6c9ca61da34b7deab4ee04fa5765faae0df25a881edf326/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913585 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959179363s: [/var/lib/containers/storage/overlay/d0196e4fab904821fe799dd39922f3ca8df3eb75110324fd0a9aa7a15728329a/diff /var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913614 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95906599s: [/var/lib/containers/storage/overlay/2a96f52767ea4c7f476ef5550610b088237325fbb9dbab098a7cc69b076e32e1/diff /var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913633 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.958976887s: [/var/lib/containers/storage/overlay/f818a295490dc54098e9f82eb3fdc0ec3bd26acc1122953c3527ff59ad00070b/diff /var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913653 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.958979718s: [/var/lib/containers/storage/overlay/f1690acd357b8fb4842f85e860bcaefd5d12100947bb41f15d9fd35a156b0dd3/diff /var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913682 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953738084s: [/var/lib/containers/storage/overlay/d465908c43d826617fa75590060c6e0bf8287722834a780da4323a389e4315e2/diff /var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913701 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953724684s: [/var/lib/containers/storage/overlay/741af280ef009db3197494aa7959cd691426d43de26095848fc68515238fabed/diff /var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913720 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953631511s: [/var/lib/containers/storage/overlay/23a526882ce466762fd2c69b0427a551f51f054d64ac437c2a479347b6220c9b/diff /var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913740 4739 fsHandler.go:133] fs: disk usage and inodes count on 
following dirs took 34.953372595s: [/var/lib/containers/storage/overlay/964a75ea37b8f5a2f946157ac6e3e073c04934c48eea24b753f4f1d499ffc2e3/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-recovery-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913762 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953387025s: [/var/lib/containers/storage/overlay/55f5784b116a980bde94491a025c0ad3814258c415bb2141fe58d32904db74de/diff /var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/cinder-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913784 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953213s: [/var/lib/containers/storage/overlay/a98588d35754d214a547b908cd12f0b3cb2f59831b999a54c10235e7520642e8/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913810 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953221761s: [/var/lib/containers/storage/overlay/b868a55cf3253cedf566b84dabcb52d2040ab82eea7d1eb32beef0bf5554519b/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovn-acl-logging/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913855 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953122708s: [/var/lib/containers/storage/overlay/3f2685e73d868406db61e4b01961b0cc5659e6004f807ba2d180ee1963239c2e/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913890 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953150379s: [/var/lib/containers/storage/overlay/a205dd171107dce3e7240bdfbfc2dfb9d082b84f73a6ebc42478570cb3911dd1/diff /var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/rabbitmq/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913911 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952490062s: [/var/lib/containers/storage/overlay/e0f9744002c636ac6c733dd35757b1ff57ba83abbb1034927d9bab621c95ea25/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913933 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952284396s: [/var/lib/containers/storage/overlay/d9f5603f21420c2eed2dcd36d06af9785be65ee1f7afe5d38bf37d7064ea98d5/diff /var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913956 4739 fsHandler.go:133] fs: disk usage and inodes count on 
following dirs took 34.952306527s: [/var/lib/containers/storage/overlay/c2415a48ceb853479cace00e873c5248e02ef518a6c447309f7c2b5b4ceaa7f2/diff /var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913978 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952318076s: [/var/lib/containers/storage/overlay/921ea6d74100521105ca9e7f3ae85f5119d0ff0eb21fee2509474232e195b3b7/diff /var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/manila-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914012 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952318756s: [/var/lib/containers/storage/overlay/ec81531694eb42c4e9714a7ac738070a0e436ee29c3542ce93dacde422fad28e/diff /var/log/pods/openstack_nova-scheduler-0_a2569778-376b-41fc-bdca-3bb914efd1b1/nova-scheduler-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914035 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952308087s: [/var/lib/containers/storage/overlay/29b411669afa1386b3f7350543dfa8f0b4a1d685f8f038525b5b29edcdae1b18/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-recovery-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914060 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952314996s: [/var/lib/containers/storage/overlay/b0f66f5c0679c0458bc1037c9ac279df3d393c19182f447a09cf94f32992b5e5/diff /var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d1b160f5dda77d281dd8e69ec8d817f9/kube-rbac-proxy-crio/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914082 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952329877s: [/var/lib/containers/storage/overlay/0213c4c1e4d4b5b6564e344a9d5cecbbd51d00ee6d9f2e92711cce4dfc2ae4f2/diff /var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914103 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952324427s: [/var/lib/containers/storage/overlay/896ff3b1a2a9f6044c7919453fce639e3fe631f8d96994248ee906c0ebe0f768/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-rev/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914138 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952343698s: [/var/lib/containers/storage/overlay/62142d9dbd670a8a2e5cc6fc2a674280e318faa7e4482a6bf70323f3324e4397/diff /var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914144 4739 fsHandler.go:133] fs: disk usage and inodes 
count on following dirs took 34.952259806s: [/var/lib/containers/storage/overlay/817b029f5d431eb956b766301cf0b454af3df07b69d61d9dde511e57998e9038/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/sg-core/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914190 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952277626s: [/var/lib/containers/storage/overlay/ea13046701c8b9367305912aadcdc525a87e4d506ae3902cafbfd064b90ccd93/diff /var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914212 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952292226s: [/var/lib/containers/storage/overlay/e57b319790b3f4154378d9a89c200958bfc47e7f840fbc619968e39726a2be16/diff /var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914160 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952248945s: [/var/lib/containers/storage/overlay/0f165253f02a9a9d347f4d2ad621a446986edb07edeed0c844ebcb5948b385a2/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914233 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952251075s: [/var/lib/containers/storage/overlay/412a7e6a8f2dc9d2d8e20eef3184e4ef2ab70084fe09ab31e6c4b51b5e69f2a2/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-readyz/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914247 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95211691s: [/var/lib/containers/storage/overlay/bf320082fb295f99b636e1b881003d348452deb87a8dabc50b3ac32ffa327292/diff /var/log/pods/openshift-console_console-7f9d58689-7z254_53004a12-f1d2-4468-ac01-f00094e24d56/console/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914266 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952024028s: [/var/lib/containers/storage/overlay/c6f8d146292dfe0fcf95bbdbd2acb6a5701968983db1c50282339289c7e02b3b/diff /var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914282 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952033018s: [/var/lib/containers/storage/overlay/3c0f4b5bc273ea8e3dacd67977959bf50c1cb9795d0a9d401e7f1da022aa7a69/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovn-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914300 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.951028222s: 
[/var/lib/containers/storage/overlay/fd6daeb7c843a95d68eabd429e5a869630db7f2fed13867c1b5695eda1f6842d/diff /var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914318 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950856405s: [/var/lib/containers/storage/overlay/138f75b090e35c8396d8f24452a63e3902367c8bb4705b30a3f11d970633b676/diff /var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914314 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952361058s: [/var/lib/containers/storage/overlay/5db496dc9cc198037ea807094af967b8d0a92d8506dd6cb312b8e85aac413993/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914335 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950850116s: [/var/lib/containers/storage/overlay/69842ce0e31e54074f5a268641d37fe56d06c0c0f9932387c46faa9190cc1342/diff /var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914348 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952084659s: [/var/lib/containers/storage/overlay/ea910cd65056347531897b577c4a7a62347bd4797266100e9ec93623a80536bb/diff /var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-metadata/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914354 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950858706s: [/var/lib/containers/storage/overlay/e1915d5adcbb6fb849556c82c9241eb389217ce73f48eb53937b0175ad1f6cff/diff /var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914366 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950300351s: [/var/lib/containers/storage/overlay/b06efc9b65fa79f2dbd62ba006d5f370c9827667e13c69f09f4d284e66da6de3/diff /var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914379 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950369753s: [/var/lib/containers/storage/overlay/fcad9e4be8c81260acacf01b1e4fcbb7b7d2bfd8e548d2c6a06ae28f9fe28259/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914384 4739 fsHandler.go:133] fs: disk usage and inodes 
count on following dirs took 34.949905761s: [/var/lib/containers/storage/overlay/dbea827120443de2b8e12d78db03f7ee5da19a852487919918c09fc56e2c6ebe/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcdctl/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914398 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949878089s: [/var/lib/containers/storage/overlay/9419c901b837be6b1a96b56757b002db3d748c4d63bdb2bd52ac4a705aa37aba/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/proxy-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914410 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949906061s: [/var/lib/containers/storage/overlay/d41a475881a5884ece44893d6c6581faf339512ca0cebc8079652cb5372a85b7/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/nbdb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914417 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94988499s: [/var/lib/containers/storage/overlay/6d81502d9d20a1fbe78ae63263fd9259b5fedbbd7d7b99d0b4ebece9684ae632/diff /var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914429 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949365015s: [/var/lib/containers/storage/overlay/9a016f7d3bea016f91b38a6a0f145637079346e16cb4bb333441167ac4dc3806/diff /var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914446 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960806187s: [/var/lib/containers/storage/overlay/9bb105a6e14e18029fa733928d15e59646617fddb759414644eb3e83b407f51a/diff /var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914449 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94913792s: [/var/lib/containers/storage/overlay/4dc0b09098c12e04d251b6f2ef1a95cf5518c33f0471f075349c6332c88ecb44/diff /var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914477 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949435208s: [/var/lib/containers/storage/overlay/50a55be008beefd695ec3d785a297636edfb423851128f024829f71bc704399e/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-notification-agent/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914501 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948990025s: 
[/var/lib/containers/storage/overlay/8b87cfbb72f7657c092811b88be9a87dae853f23e897f30726cdf2a23b05208e/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914523 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948744779s: [/var/lib/containers/storage/overlay/1811867ec6ebc010121ffcbc15f987b1efb4a7a684e40534a1c664610c0d5872/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/kube-rbac-proxy-node/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914543 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948765269s: [/var/lib/containers/storage/overlay/989d3b544bd1e3b430b64c641267aab6fbfe00aa5cc79659cedfe70769d06abf/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/kube-rbac-proxy-ovn-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914562 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94878288s: [/var/lib/containers/storage/overlay/40df50113a848e211643ce01a31c20c05c78f4f0a7fff581a143015618901a59/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/northd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914566 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960891609s: [/var/lib/containers/storage/overlay/03434b1e7ffc2ad11f54d9843f530e2e768d8538e59cb9d06bde051f2870caa5/diff /var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914581 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948730578s: [/var/lib/containers/storage/overlay/42edeff4ab5f6a10526da0e6d8906d75416062121cd06f4c390e5ea567ec8138/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/sbdb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914585 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948682427s: [/var/lib/containers/storage/overlay/31fe65fdde3d2504008d950c6d26c45ab5c98606475b104492bfb53e087bed04/diff /var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914601 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948617916s: [/var/lib/containers/storage/overlay/1c28c79178e70fbb2f54dae1f21b0cb474f2be1261e54634e3469e5528debdfb/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914603 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948691478s: [/var/lib/containers/storage/overlay/2f4df915f14b62090d0795f86db4bc6a255450838dc5ab07391483c063afa402/diff /var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914620 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.238314149s: [/var/lib/containers/storage/overlay/82780da71a0312889260528ec61ee34764a89b5cd283b4dc84fba96bc5b07e72/diff /var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914624 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.51307408s: [/var/lib/containers/storage/overlay/071e10d4be0efb5018c114ad13fd42c9004e37c18ef8352bde13d4ab7c142773/diff /var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/webhook/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914637 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960962041s: [/var/lib/containers/storage/overlay/1da32ffa3f25a41eeba7b29fd5a2777f9ce4ff8b5d419227a1dbb33439609de2/diff /var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914650 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.232200303s: [/var/lib/containers/storage/overlay/771dc896a468e906ec589a4f20e16f226b72be9e1f52a8cbd776648be126b36b/diff /var/log/pods/openshift-machine-config-operator_machine-config-daemon-xlqds_27db8291-09f3-4bd0-ac00-38c091cdd4ec/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914657 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.19097266s: [/var/lib/containers/storage/overlay/9df0b1a5ad0ad269b83382b745aa54d64447fa8c4308d5e4d09bc0ab8f967462/diff /var/log/pods/openshift-network-operator_network-operator-58b4c7f79c-55gtf_37a5e44f-9a88-4405-be8a-b645485e7312/network-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914681 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.190989641s: [/var/lib/containers/storage/overlay/7fbd48f240676ad3e837e6c57e06c7bf12395f94101880a86e53a3fd80978670/diff /var/log/pods/openshift-dns_node-resolver-ppn47_e1b5ceac-ccf5-4a72-927b-d26cfa351e4f/dns-node-resolver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914681 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.929497363s: [/var/lib/containers/storage/overlay/44b0f4b868868521f89812cf72be1c47e2af3c3d35b2f42e6b1ce84cd508ba66/diff /var/log/pods/openshift-machine-config-operator_machine-config-daemon-xlqds_27db8291-09f3-4bd0-ac00-38c091cdd4ec/machine-config-daemon/15.log]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914697 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.879304136s: [/var/lib/containers/storage/overlay/3f81f7bdc8039f42c2187b0b809dd597ad71379f7f426470b905d09c1b74d09a/diff /var/log/pods/openshift-network-operator_iptables-alerter-4ln5h_d75a4c96-2883-4a0b-bab2-0fab2b6c0b49/iptables-alerter/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914710 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.909403836s: [/var/lib/containers/storage/overlay/489ea5af544451a8c623e609102169638c79f6a970e79d7acd677452bdcef2c6/diff /var/log/pods/openstack_keystone-755fb5c478-dt2rg_5e665ce5-7f58-4b17-9ccf-3e641a34eae8/keystone-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914720 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.096385989s: [/var/lib/containers/storage/overlay/e40423c2a72cfa61b3ae4e60585b6f4ca55113630ff18b28e5833f7e6c7f10d6/diff /var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914757 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.748333762s: [/var/lib/containers/storage/overlay/8b98d2319704238b54feac1eaae15811617025e926528976a8cee47c93663674/diff /var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-52ckg_2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914779 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.115184593s: [/var/lib/containers/storage/overlay/12d5298b677ab77dbac965aaedbd7b6ff9cd970602ddf0bb5a813809452c9b2e/diff /var/log/pods/openshift-image-registry_node-ca-8zn2s_4f22c949-cafc-4c90-af3b-a0c01843b8c1/node-ca/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914803 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 28.321054181s: [/var/lib/containers/storage/overlay/95760442af7efa52e7bbb288e0bb6eaf6bbbc4e0160f8f9ed95bc13f677cb532/diff /var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-52ckg_2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4/machine-approver-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914847 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.931470883s: [/var/lib/containers/storage/overlay/74370470126ed1ffaa762024ccddb85e137995172f42a43e3919b28c9ff9058f/diff /var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/3.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914870 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.599990208s: [/var/lib/containers/storage/overlay/a20482603cff179ce5b970a4072116f3d4adc435c58e684bc6d0a54499a0f609/diff 
/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/01c2bc965f742c15303300d45b0194248b00aaa0b99f54fdb6551133db57141b.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914894 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.443743903s: [/var/lib/containers/storage/overlay/9597b497744ec1e2282bbb776b5c2284bd6c15477da588dd8f280094cedaee88/diff /var/log/pods/openshift-console-operator_console-operator-58897d9998-gw4z7_04cf092e-a0db-45c5-a311-f28c1a4a8e1d/console-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914917 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.961237008s: [/var/lib/containers/storage/overlay/2686629fdecf63837c52f5d6cd19c37e88f4d43be2f4175ab138e85587664c9a/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovnkube-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914942 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.872233972s: [/var/lib/containers/storage/overlay/d3a839782e09db7744fcdb5e9be20e2fdf487e02f6a10f6d9c470422801406fb/diff /var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-mrnp9_03c04a1d-2207-466b-8732-7e90b2abd45a/authentication-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914964 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.404847146s: [/var/lib/containers/storage/overlay/e30334793fcd4d6a4f95f74e9ee0fbf18de0364ffd5a27f05ca1f75bb4bc7c4d/diff /var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-t985g_f99aadf5-6fdc-42b5-937c-4792f24882ce/olm-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914954 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94952323s: [/var/lib/containers/storage/overlay/1a3b508b788564de23dcfb338760a5b3c3fa19b31a00d06652e4dbe1027c6673/diff /var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914996 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.021160009s: [/var/lib/containers/storage/overlay/93cd3b5f2adb0b196e6ea1ae5ae7ab86054c1b8396c37accea48180da40fa501/diff /var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-lvklm_c3e32932-afd4-4e36-8b07-1c6741c86bbd/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915005 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.009962735s: [/var/lib/containers/storage/overlay/7c462312be49c5484d0acca2a5cdd1225a75ea1945bec70791d127ee9df6d3d6/diff /var/log/pods/openshift-machine-config-operator_machine-config-server-jcttp_41a5775c-2a4c-43f6-869c-9fb214de2806/machine-config-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 
16:40:06.914979 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.96392323s: [/var/lib/containers/storage/overlay/742e410970040a1f97f26e7ec1d73455cdfc87932c0048ff22365d213dc15ba1/diff /var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914987 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.397730672s: [/var/lib/containers/storage/overlay/5afedf064f77acecaa6d54eab90aeb0c3efeff89d75a9f048d68743a445d0cea/diff /var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-xw8w7_7b7d9bcd-b091-4811-9196-cc6c20bab78c/catalog-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915004 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.990787293s: [/var/lib/containers/storage/overlay/ad28abc4afb3f1a8137560d6d6a3047cd2012984853352017a3c3b5a29e0219f/diff /var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/cluster-samples-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915031 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.952383066s: [/var/lib/containers/storage/overlay/4eb3f4d7e62c8c0e2b73bf5fdfc8ba04138b275955b657b5dc41a4df8c03c158/diff /var/log/pods/openshift-etcd-operator_etcd-operator-b45778765-qqgkc_348f800b-2552-4315-9b58-a679d8d8b6f3/etcd-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915024 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.831378134s: [/var/lib/containers/storage/overlay/2d7a54ebeb95779572fef7d7af8138300105afdc82b64b4f721e0936319fbc62/diff /var/log/pods/openstack_tempest-tests-tempest_156e0f25-edfe-462a-ae5f-9f5642bef8bb/tempest-tests-tempest-tests-runner/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915041 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.754230722s: [/var/lib/containers/storage/overlay/162e6c4c1da2161c74db8019faf33efccb7e5b619496a683237c698f261dab8c/diff /var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/55a56bfc3731242b6805a1b12acb9ab95fdb4491974ffaf7b15df0079577d50a.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915109 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.197940614s: [/var/lib/containers/storage/overlay/db6a4f0803d0970c9a4c31c035e6044779fd31ff1141ca0b18275ac38813d9d6/diff /var/log/pods/openshift-ingress-canary_ingress-canary-796x7_82e0a5a3-17e1-4a27-a30a-998b20238558/serve-healthcheck-canary/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915051 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.761410128s: [/var/lib/containers/storage/overlay/ec1692b5b97ee07b4786b98288fc65f82fe4d3a7f6c05b5f862c35278bbbebf6/diff 
/var/log/pods/openshift-dns-operator_dns-operator-744455d44c-k4fwk_97e7a4a3-f7f2-4059-8705-20acd838d431/dns-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915057 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.283140415s: [/var/lib/containers/storage/overlay/4487f95bc84f0f1ca271eb666dd43ec9fe46f1b7cf96f5012aff8a51f3c7456d/diff /var/log/pods/openshift-machine-config-operator_machine-config-controller-84d6567774-4r9td_ad0a47df-29cb-4412-af60-0eb3de8e4d00/machine-config-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915075 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.247399332s: [/var/lib/containers/storage/overlay/da99decbffdb4107c1cfeb2d493a270f09666dafa8cc07b8fa03c6810d04da36/diff /var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915068 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.247415032s: [/var/lib/containers/storage/overlay/5eccc4655197e9884485e2bfa25c70fcfb8d3fb65ccd4a83570cbb52cedc004d/diff /var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-b67b599dd-w6vhs_77b5b7f5-050a-4013-9d21-fdfae7128b21/kube-storage-version-migrator-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915094 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.204093243s: [/var/lib/containers/storage/overlay/2e1d006af5451f32294c4f0019a843111f4443b0b9f8aa57a4be3e7f3515c9f0/diff /var/log/pods/openshift-machine-config-operator_machine-config-operator-74547568cd-86gpr_635cd233-be60-44f6-b899-1d283e383a5f/machine-config-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915134 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.658546576s: [/var/lib/containers/storage/overlay/887593c65996a085a41df11356aa68ce76e4e2c5c1c574f34f653d036240ce2a/diff /var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78b949d7b-kt4bq_eb2e8f4d-c66b-4476-90fe-925010e7e22e/kube-controller-manager-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915152 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.961478906s: [/var/lib/containers/storage/overlay/042b6815b7cfd357d60b3f2f6b9e77c089cc7dfa25a4abd4ee16a8bc21ea34fe/diff /var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915169 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.197994426s: [/var/lib/containers/storage/overlay/12a0ce61b9f15bafa269bf3354e778aaa0470eaf2fa744e0ea2a18eae0f23426/diff 
/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-796bbdcf4f-lws9b_e389a6f6-d97e-4ec0-a35f-a8c0e7d19669/openshift-apiserver-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915165 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.74453915s: [/var/lib/containers/storage/overlay/ed724ffef1163fed77b99143f48912a8df99354fcfdd808b0f23816aacb5d70e/diff /var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-j9qnr_114b5947-30d6-4a6b-a1c6-1b1f75888037/packageserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915200 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659972288s: [/var/lib/containers/storage/overlay/7467c79ce30f2e3f1ec2df8ea1b6bd15bd1941d27aef357b5c1a248026083591/diff /var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915187 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659964707s: [/var/lib/containers/storage/overlay/1ed6dbac91f31e9f00b5704662f7217082c9d8d2d8ce698c3f91b6cfedbf7788/diff /var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915216 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659995548s: [/var/lib/containers/storage/overlay/1029b80120e94da4a9447e16523f8ddc39583232ef797dea52424ed1e59a022c/diff /var/log/pods/openshift-image-registry_cluster-image-registry-operator-dc59b4c8b-nzpf7_35c2a5bd-ed78-4e28-b942-2aa30b4bb63f/cluster-image-registry-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915224 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659994798s: [/var/lib/containers/storage/overlay/9affd49cb067c1cb26d213c2487c45389e2276e1b080afbe8a78ae32e0c58716/diff /var/log/pods/openshift-service-ca-operator_service-ca-operator-777779d784-zfmlf_52aa9f8a-6b89-442e-b9a2-5943d96d42fc/service-ca-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915235 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.660005218s: [/var/lib/containers/storage/overlay/80c35078f8e72fac7d5779f870b614f2122886c9217eca4a3f259356f8b5408e/diff /var/log/pods/openshift-kube-storage-version-migrator_migrator-59844c95c7-bfg4d_e70b8e17-5f05-452a-9216-7593143eebae/migrator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915242 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.65894252s: [/var/lib/containers/storage/overlay/e768606b2cf0f0e8011dfb72a07f225aa4fd05e16827960db317e2d722de1757/diff /var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-756b6f6bc6-rt85v_e1f7a893-ca61-4fee-ad9d-d5c779092226/openshift-controller-manager-operator/0.log]; will not log again for this container unless duration exceeds 2s 
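The fsHandler.go:133 entries above come from cAdvisor's per-container filesystem handler inside the kubelet: for each container it walks the overlay diff directory and the pod log directory to total disk usage and inode counts, and it emits one of these throttled messages whenever a walk takes unusually long ("will not log again for this container unless duration exceeds 2s"). As a minimal sketch of that measure-then-throttle pattern (illustrative only, not cAdvisor's actual code; diskUsage, timedUsage, and the 2s threshold are assumed names and values):

package main

import (
	"io/fs"
	"log"
	"path/filepath"
	"time"
)

// diskUsage walks dir and totals file sizes and entry (inode) counts,
// roughly the "du"-style pass made over an overlay diff directory.
func diskUsage(dir string) (bytes, inodes uint64, err error) {
	err = filepath.WalkDir(dir, func(path string, d fs.DirEntry, werr error) error {
		if werr != nil {
			return werr
		}
		inodes++
		if info, ierr := d.Info(); ierr == nil && !d.IsDir() {
			bytes += uint64(info.Size())
		}
		return nil
	})
	return bytes, inodes, err
}

// timedUsage times the walk and logs only when it exceeds threshold,
// mirroring the throttled "took Xs ... will not log again unless
// duration exceeds 2s" lines in the kubelet log above.
func timedUsage(dirs []string, threshold time.Duration) {
	start := time.Now()
	for _, d := range dirs {
		if _, _, err := diskUsage(d); err != nil {
			log.Printf("fs: usage of %s failed: %v", d, err)
		}
	}
	if took := time.Since(start); took > threshold {
		log.Printf("fs: disk usage and inodes count on following dirs took %v: %v", took, dirs)
	}
}

func main() {
	timedUsage([]string{"/var/lib/containers/storage/overlay"}, 2*time.Second)
}

Walk times of 25 to 35 seconds over single container log directories, as recorded here, point to a heavily I/O-saturated node, which is consistent with the probe timeouts and API-server errors later in this log.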
Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915252 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.65894717s: [/var/lib/containers/storage/overlay/d4b1c0993f3b1f369dd77026714dfd87a19e4eca7b03d1664baa99242340fa62/diff /var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/multus-admission-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915259 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.610612584s: [/var/lib/containers/storage/overlay/bbabb27b03e9b1959c256acee13ffc8dc88ecf75adb53f333798329fa6ae13d5/diff /var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-lvklm_c3e32932-afd4-4e36-8b07-1c6741c86bbd/package-server-manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915271 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.60639063s: [/var/lib/containers/storage/overlay/869401e76157a52d5927146ec99c531a2469cadabe88eebe8dbe2d69d356fa03/diff /var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5fdd9b5758-624qq_f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82/kube-scheduler-operator-container/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915277 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.981919292s: [/var/lib/containers/storage/overlay/1bacabe8459e8b7583ca8b70f07630cdc0277f21226c711b05d1791e0c045f5f/diff /var/log/pods/openshift-console_downloads-7954f5f757-xfwnt_be284180-78a3-4a18-86b3-37d08ab06390/download-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915296 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.384123577s: [/var/lib/containers/storage/overlay/f068ef37c377e88364cf9797bfc9203fb2398eb60369268c94be86b57a54240d/diff /var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/cluster-samples-operator-watch/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915289 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.531611053s: [/var/lib/containers/storage/overlay/1e222a3dc1856e7c31529f8bacb990126a29dc2590767455965930c0b15c0799/diff /var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-766d6c64bb-mzpcf_c678179e-9aa8-4246-88c7-d0b23452615e/kube-apiserver-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915321 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.022478581s: [/var/lib/containers/storage/overlay/7a5a5af5ae904e00b0f0062da342e6236614d4f9c8052e16b0e50172de0c8fd9/diff /var/log/pods/openshift-machine-config-operator_machine-config-operator-74547568cd-86gpr_635cd233-be60-44f6-b899-1d283e383a5f/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915343 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.99637627s: 
[/var/lib/containers/storage/overlay/471062e27acc5c3b084339c0af2a1e95000cb788274c9526ed460557984b27b7/diff /var/log/pods/openshift-ingress-operator_ingress-operator-5b745b69d9-d8mf9_4d3373de-f525-4c47-8519-679e983cc0ba/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915344 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.112151463s: [/var/lib/containers/storage/overlay/f1730caf7105d60bbcff9cc4a7da6ba336c98473e0b71b446426ff588c16eac4/diff /var/log/pods/openshift-machine-config-operator_machine-config-controller-84d6567774-4r9td_ad0a47df-29cb-4412-af60-0eb3de8e4d00/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915363 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.650296087s: [/var/lib/containers/storage/overlay/9f94c8a826a42ce055a78a1b3a327369aa5363dc9e4cd66c04fb2e2eee4d3b79/diff /var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-g47s4_93e52f9b-f4a8-41b8-ba57-2dbbe554661f/openshift-config-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915392 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.438533491s: [/var/lib/containers/storage/overlay/e0da933a9a0e7819e8b0ed1c5e871efb77a36eaef32f8f5ae4a368d984ebac7b/diff /var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915374 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.966111155s: [/var/lib/containers/storage/overlay/21a27c94dda08da40fd45c6933a9d9919e1dcda005fe07cbb3a293e515fa761d/diff /var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915438 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.389450114s: [/var/lib/containers/storage/overlay/a40903b7a4f583c5758eb1d2031a89a0f64178691d9249c41950e0b456839fa0/diff /var/log/pods/openshift-dns_dns-default-xg9nx_61310358-52da-4a4b-bcfd-4f68340d64c3/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915439 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.556997347s: [/var/lib/containers/storage/overlay/631a0d50f9717d02da609f8be14ceb46f9a52d60d0a860495b67d8c85480a07d/diff /var/log/pods/openshift-kube-storage-version-migrator_migrator-59844c95c7-bfg4d_e70b8e17-5f05-452a-9216-7593143eebae/graceful-termination/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915459 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.389465095s: [/var/lib/containers/storage/overlay/2aee9ba5729d8e59176ce265bd091951489eee6dcba20e1a533befaacfe838ea/diff /var/log/pods/openshift-dns-operator_dns-operator-744455d44c-k4fwk_97e7a4a3-f7f2-4059-8705-20acd838d431/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915476 
4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.180919096s: [/var/lib/containers/storage/overlay/18f65f4b4e642bcf76d9c21edcf97486865b0fc98af755735ca78ccf69a0ca4f/diff /var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915479 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.180922246s: [/var/lib/containers/storage/overlay/bf957692daeeff7c60c8efe1522f00169592c4b7045108737c475539050ca4c4/diff /var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915492 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.15455882s: [/var/lib/containers/storage/overlay/8f7e6bb545a99f682261f31b5e0c2c80abb915390873f772c863d0cb40939eff/diff /var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915505 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.172727164s: [/var/lib/containers/storage/overlay/4d0ccc33a2301fd18682bd9ba8279b10a80ecdf057427914c4e546d2ce99995f/diff /var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915510 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.1545736s: [/var/lib/containers/storage/overlay/640af31364525c5eddf3904dcd119525f2a17afd409407a17153ff81a50eca81/diff /var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915524 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.1545846s: [/var/lib/containers/storage/overlay/220e1c2447ec8f446e4610d2569251286079dc9656c511067cb7fd4970698f22/diff /var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915535 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.154596321s: [/var/lib/containers/storage/overlay/b2e631c758618e7c1db61f0d5573090828e710811a9f8eafaeece16b4cc6982e/diff /var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915565 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.080106891s: [/var/lib/containers/storage/overlay/44ef5597da71a4b834cb9e6d1d5438f0f696128972149fd78596f4688498b28a/diff 
/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915566 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.088249773s: [/var/lib/containers/storage/overlay/540a86c6805e5052d10f7534636f58da829c985ac0e7bf33275eafa523b40c35/diff /var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915593 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.094670398s: [/var/lib/containers/storage/overlay/5337c9504ecc438fb625c667f57434403ff9d101dcb741bedc26941dcd43ba13/diff /var/log/pods/openshift-network-console_networking-console-plugin-85b44fc459-gdk6g_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/networking-console-plugin/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915609 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.069005419s: [/var/lib/containers/storage/overlay/ecb31c1337159ce25de6cb7696e2c3e3c898d9497a8245edc8f1136a90486e07/diff /var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915614 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.985748523s: [/var/lib/containers/storage/overlay/7bc804443ef787354e6ef5477daccd02ee94463aeaafdb762cf7bdf501314342/diff /var/log/pods/openshift-network-diagnostics_network-check-target-xd92c_3b6479f0-333b-4a96-9adf-2099afdc2447/network-check-target-container/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915628 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.931522146s: [/var/lib/containers/storage/overlay/75dd0dead6312cf8b488bd0bf5574839af71430dcb8980d05fa94fde66bdded1/diff /var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915645 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.772919668s: [/var/lib/containers/storage/overlay/1251390938ad212fd79060963b8c52ef11d4439ce2471cb89afc7a6716ee1153/diff /var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915647 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.198485821s: [/var/lib/containers/storage/overlay/14d78cb78b2e0447a7fec232eca46b00ef621a6d35c1ebc65e8d796deb52a751/diff /var/log/pods/openshift-ingress-operator_ingress-operator-5b745b69d9-d8mf9_4d3373de-f525-4c47-8519-679e983cc0ba/ingress-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 
16:40:06.915652 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.959841946s: [/var/lib/containers/storage/overlay/dc7d14238c068294ef66877fd861d36187036dbdfef0c0b54b8684f3d1442a7c/diff /var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915665 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.660446511s: [/var/lib/containers/storage/overlay/7b8348219f7d6420dcd11c96211caecc216e7b883306350684b781905ccc18f0/diff /var/log/pods/openshift-service-ca_service-ca-9c57cc56f-lzrxp_aa3cda86-5932-40aa-9c01-3f95853884f9/service-ca-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915682 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.188752753s: [/var/lib/containers/storage/overlay/c237599bdbf0739e33acc5df385a6c717fd59507ac284d93782fe5f6905635ff/diff /var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/ovsdbserver-sb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915698 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.937474749s: [/var/lib/containers/storage/overlay/0f16542c2c55ec1a1cf3076e6ced11078fff89ebd667222fc114aac2ea033796/diff /var/log/pods/openshift-ingress_router-default-5444994796-hm72p_c3085f19-d556-4022-a16d-13c66c1d57d1/router/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915698 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.77082356s: [/var/lib/containers/storage/overlay/befded20c9d50710013d115a3565efc1e9d313b27bf4c20c4e3565116d2a1647/diff /var/log/pods/openshift-oauth-apiserver_apiserver-7bbb656c7d-ql4qj_e7cd1565-a272-48a7-bc63-b61518f16400/oauth-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915707 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.188769404s: [/var/lib/containers/storage/overlay/ca3d50f7bc17c9d09a4caf803f81ac5b0aed1f87e7bb8b9bf09dff3edce762a7/diff /var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/ovsdbserver-nb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915718 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.180546131s: [/var/lib/containers/storage/overlay/b6f3ec5c54c9a9407cfddb8e8e5b0f709489378fd19592c4625603811058233e/diff /var/log/pods/openstack_memcached-0_aa850895-9a18-4cff-83f8-bf7eea44559e/memcached/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915723 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.046736688s: [/var/lib/containers/storage/overlay/cda1a702e14c678238159f15523a757a45c9263777e0f6340b7d047bae614cc7/diff /var/log/pods/openstack_openstackclient_8f733769-d3f8-4ced-be3b-cbb84339dac5/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915738 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 20.962326869s: [/var/lib/containers/storage/overlay/782e1a12310c5c77e30561796ce5274320c229434407b58f18695a97a07d9068/diff /var/log/pods/openstack_nova-cell1-novncproxy-0_52afdd4f-bb93-4cc6-b074-7391852099ee/nova-cell1-novncproxy-novncproxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915829 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.731448918s: [/var/lib/containers/storage/overlay/6df8e5055cdc030e6a8f4a51af52505a5aa4b9ef01a1fcd733de9c1608af2e92/diff /var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915957 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.792463s: [/var/lib/containers/storage/overlay/78e8ceb06717feba2fbeea800d71d64c244dd31a0b043001722a05c3f1d0f8ac/diff /var/log/pods/openshift-apiserver_apiserver-76f77b778f-jbgcq_079963dd-bb7d-472a-8af1-0f5386c5f32b/openshift-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915959 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.130300282s: [/var/lib/containers/storage/overlay/e533e8900bcc8d14ead2a1e72eac407e9d45c105e9fd34f4031c34e1b9101700/diff /var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/kube-multus-additional-cni-plugins/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915967 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.584464861s: [/var/lib/containers/storage/overlay/91470f2e013ace35431bde9aca4943352e8bfc6c65f11ce1096eaac946807400/diff /var/log/pods/hostpath-provisioner_csi-hostpathplugin-p994f_0bdb427a-96c7-4be9-8d54-c0926d447a36/hostpath-provisioner/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915988 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.936148876s: [/var/lib/containers/storage/overlay/14b0195e5aaa9aa48044fda968c96aa4c35cc4b478a398781434d52d64906486/diff /var/log/pods/openshift-dns_dns-default-xg9nx_61310358-52da-4a4b-bcfd-4f68340d64c3/dns/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915996 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.504451849s: [/var/lib/containers/storage/overlay/4cdb4956b37089067a75611361160836691e6e1c6a7bb8c02dd1b92e2dc9b966/diff /var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.916530 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952008198s: [/var/lib/containers/storage/overlay/f6305a7872dda7f491e6f50930adfb85c18487f630ffca41be866f1a43f6b00e/diff /var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log]; will not log again for this 
container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.919778 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95427101s: [/var/lib/containers/storage/overlay/4ea3203dd71416b833cb63fe515afbd7bae5ad6c342f56fc6ee97245e9ea187e/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.932129 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.97817487s: [/var/lib/containers/storage/overlay/d3ca4646fa391b4dfa93917c6093931e145b8397ebab8df67acc861035668e25/diff /var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.949354 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.995277416s: [/var/lib/containers/storage/overlay/7faaf79ad10a1e651cbbb47b5dd69c5803d7acecfbe4265031004fd4b94066fb/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.949442 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.995365338s: [/var/lib/containers/storage/overlay/e08129f834fb61b115d9669b15f4a7d4d451dfe1d7ee66637df301561a6aeeda/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.956405 4739 trace.go:236] Trace[1283227927]: "Calculate volume metrics of kube-api-access-mr8bh for pod openshift-service-ca/service-ca-9c57cc56f-lzrxp" (21-Jan-2026 16:39:31.931) (total time: 35024ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[1283227927]: [35.024392799s] [35.024392799s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.956976 4739 trace.go:236] Trace[532740184]: "Calculate volume metrics of run-httpd for pod openstack/ceilometer-0" (21-Jan-2026 16:39:31.928) (total time: 35028ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[532740184]: [35.028617534s] [35.028617534s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.957589 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.021381907s: [/var/lib/containers/storage/overlay/993a4fbfff75c78b8f2e1174be0fbda60970cc122e17ca664da691075c8cff35/diff /var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.958284 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.021705815s: [/var/lib/containers/storage/overlay/f8f7d1d873f927fd42af1eaa61f809eb28dcc69330165495bc87c8f4c3e0f0af/diff /var/log/pods/openshift-controller-manager_controller-manager-587464d68c-dggjn_efe44aa5-049f-4323-8df8-d08d3456d2fd/controller-manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.981977 4739 reflector.go:484] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: I0121 16:40:06.983528 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.046424228s: [/var/lib/containers/storage/overlay/67096baa3b528a21bd50f59da522e8a6a6eb675929619947f70c250e88e63c65/diff /var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.025027 4739 trace.go:236] Trace[1292234481]: "iptables ChainExists" (21-Jan-2026 16:39:31.963) (total time: 35061ms): Jan 21 16:40:07 crc kubenswrapper[4739]: Trace[1292234481]: [35.061059507s] [35.061059507s] END Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.100620 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.130012573s: [/var/lib/containers/storage/overlay/bfdb2a88c395a92e2aeee03d1958897afb6ecde8b4fb0dd767ece6a5962fc09c/diff /var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.101012 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.147284214s: [/var/lib/containers/storage/overlay/54ab7a4af8cec70c2855bcf8a6ee1c19b7b958180bccecaf30338eb88b9ef588/diff /var/log/pods/openstack_ovn-controller-metrics-5sdng_d9e43d4c-0e56-42cb-9f23-e225a7451d52/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.168937 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.214854064s: [/var/lib/containers/storage/overlay/09cefad2a715846a880720feeb4b72040066b5a38ff8e8e5a30af72c3b254d59/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.169000 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.215148712s: [/var/lib/containers/storage/overlay/e4428af010c5273c2963931f32bfce5c0ae92dd4e2289880d7264fd15e714947/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: W0121 16:40:07.178123 4739 reflector.go:484] object-"openstack"/"cert-kube-state-metrics-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:07 crc kubenswrapper[4739]: E0121 16:40:07.188059 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.192646 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.238965761s: [/var/lib/containers/storage/overlay/bdb7de00ca9bb34a1e32f32cca56e7c2f4d1602d1beeba65486e0533c266797e/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.193163 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.23934144s: [/var/lib/containers/storage/overlay/67af9e7666461ea035b6ec31ff4c5b6e5a50442b63d9bed9ed1119310ec8c0c6/diff /var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.301892 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.34802527s: [/var/lib/containers/storage/overlay/d98e80d1190a81b64c8bb9ea171c38a5c3b0312545a2b070573c8cf33d1c612c/diff /var/log/pods/openshift-cluster-version_cluster-version-operator-5c965bbfc6-62c7v_b2bbaa74-fc02-4130-aec7-49b9922e6af7/cluster-version-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.348279 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="27acefc8-6355-40dc-aaa8-84029c626a0b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.153:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.385958 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.525381 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.412708 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414150 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414416 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488801 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="3e7c2005-9f9a-41b3-b7c0-7dc430637ba8" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.239:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488892 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="7353ecec-24ef-48a5-9046-95c8e0b77de0" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.238:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488927 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488978 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.489121 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.490728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491144 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491305 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491462 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491577 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.538388 4739 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.538426 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492114 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" podUID="4ec8cb71-79f4-4c17-9519-94a7d2f5d25a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.70:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492277 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-nq75j" podUID="9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c" containerName="controller" probeResult="failure" output="Get 
\"http://10.217.0.49:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492357 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492480 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" podUID="df4966b4-eef0-46d7-a70b-f7108da36b36" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492763 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492213 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505113 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="52afdd4f-bb93-4cc6-b074-7391852099ee" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.181:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505053 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505425 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.164:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505751 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505881 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" 
podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506388 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506575 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506598 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.490510 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": http2: client connection lost" event="&Event{ObjectMeta:{ceilometer-0.188ccc7890978040 openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:f2fec0ae-aaf7-434d-b425-7b3321505810,APIVersion:v1,ResourceVersion:67693,FieldPath:spec.containers{ceilometer-central-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 16:39:31.843752 +0000 UTC m=+4403.534458284,LastTimestamp:2026-01-21 16:39:31.843752 +0000 UTC m=+4403.534458284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518727 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518767 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519264 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519332 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.164:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.545704 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.546015 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413252 4739 patch_prober.go:28] interesting pod/controller-manager-587464d68c-dggjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.558925 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" podUID="efe44aa5-049f-4323-8df8-d08d3456d2fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413889 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491654 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413530 4739 patch_prober.go:28] interesting pod/controller-manager-587464d68c-dggjn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.559335 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" 
podUID="efe44aa5-049f-4323-8df8-d08d3456d2fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414716 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.559427 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.468427 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468364 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468493 4739 reflector.go:484] object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468525 4739 reflector.go:484] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468594 4739 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468614 4739 reflector.go:484] object-"openshift-authentication-operator"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468673 4739 reflector.go:484] object-"hostpath-provisioner"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.472608 4739 reflector.go:484] object-"openshift-console"/"console-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.472908 4739 reflector.go:484] object-"openshift-authentication-operator"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.473342 4739 reflector.go:484] object-"openstack"/"cert-placement-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.474876 4739 reflector.go:484] object-"openstack"/"ovnnorthd-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475218 4739 reflector.go:484] object-"openstack"/"openstack-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.577400 4739 reflector.go:484] object-"metallb-system"/"speaker-dockercfg-kpgsq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.577463 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475563 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584406 4739 reflector.go:484] object-"openstack"/"ovndbcluster-sb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584454 4739 reflector.go:484] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584499 4739 reflector.go:484] object-"openstack"/"manila-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584531 4739 reflector.go:484] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584549 4739 reflector.go:484] object-"openshift-dns-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584595 4739 reflector.go:484] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584695 4739 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.584956 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.587743 4739 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475661 4739 reflector.go:484] object-"openstack"/"rabbitmq-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475683 4739 reflector.go:484] object-"openstack"/"cinder-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475713 4739 reflector.go:484] object-"metallb-system"/"speaker-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.476000 4739 reflector.go:484] object-"openstack"/"ceilometer-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477020 4739 reflector.go:484] object-"openstack"/"cert-barbican-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477134 4739 reflector.go:484] 
object-"openstack"/"cinder-cinder-dockercfg-4sncj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477190 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477333 4739 reflector.go:484] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477388 4739 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477540 4739 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477588 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477728 4739 reflector.go:484] object-"openstack"/"test-operator-controller-priv-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477780 4739 reflector.go:484] object-"openstack"/"ovndbcluster-nb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477960 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477997 4739 reflector.go:484] object-"openstack"/"manila-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.488258 4739 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc 
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491804 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.589716 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492001 4739 patch_prober.go:28] interesting pod/route-controller-manager-7db54bc9d4-7l9zx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.590888 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podUID="01cc83e2-7bed-4429-8a77-390e56bbf855" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492566 4739 patch_prober.go:28] interesting pod/route-controller-manager-7db54bc9d4-7l9zx container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.591177 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podUID="01cc83e2-7bed-4429-8a77-390e56bbf855" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492658 4739 patch_prober.go:28] interesting pod/dns-default-xg9nx container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.35:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.591325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-xg9nx" podUID="61310358-52da-4a4b-bcfd-4f68340d64c3" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.35:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493210 4739 reflector.go:484] object-"openshift-machine-api"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493449 4739 reflector.go:484] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493458 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.497400 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.498492 4739 reflector.go:484] object-"openstack"/"memcached-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505261 4739 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t5799 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592145 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podUID="ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.505512 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505647 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.505736 4739 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505950 4739 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-ql4qj container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592365 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" podUID="e7cd1565-a272-48a7-bc63-b61518f16400" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506072 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592437 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506134 4739 patch_prober.go:28] interesting pod/network-check-target-xd92c container/network-check-target-container namespace/openshift-network-diagnostics: Readiness probe status=failure output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592508 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" containerName="network-check-target-container" probeResult="failure" output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506358 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jbgcq container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592574 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podUID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.512171 4739 reflector.go:484] object-"openstack"/"rabbitmq-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513637 4739 reflector.go:484] object-"openstack"/"cinder-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513688 4739 reflector.go:484] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513952 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.519312 4739 reflector.go:484] object-"openstack"/"cert-nova-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519347 4739 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t5799 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podUID="ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.521644 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.525337 4739 reflector.go:484] object-"openshift-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649037 4739 reflector.go:484] object-"openshift-service-ca-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649105 4739 reflector.go:484] object-"openshift-ingress-canary"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649158 4739 reflector.go:484] object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649191 4739 reflector.go:484] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649220 4739 reflector.go:484] object-"openstack"/"nova-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649240 4739 reflector.go:484] object-"openstack-operators"/"metrics-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.656981 4739 reflector.go:484] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.658313 4739 reflector.go:484] object-"openstack"/"kube-state-metrics-tls-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659002 4739 reflector.go:484] object-"openshift-console"/"console-dockercfg-f62pw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659033 4739 reflector.go:484] object-"openshift-service-ca"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659055 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663128 4739 reflector.go:484] object-"openshift-console-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663185 4739 reflector.go:484] object-"openstack"/"nova-metadata-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663212 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663234 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663261 4739 reflector.go:484] object-"openstack"/"openstack-cell1-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663293 4739 reflector.go:484] object-"openshift-console"/"oauth-serving-cert": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663320 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663349 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663374 4739 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663401 4739 reflector.go:484] object-"openstack"/"openstack-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663431 4739 reflector.go:484] object-"openstack"/"cert-neutron-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663456 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-router-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663638 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663866 4739 reflector.go:484] object-"openstack"/"barbican-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664044 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664073 4739 reflector.go:484] object-"metallb-system"/"metallb-excludel2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664103 4739 reflector.go:484] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664150 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664175 4739 reflector.go:484] object-"openstack"/"placement-placement-dockercfg-zgf5q": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664190 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664214 4739 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664235 4739 reflector.go:484] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664258 4739 reflector.go:484] object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664283 4739 reflector.go:484] object-"openstack"/"glance-glance-dockercfg-lc9pg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664411 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664722 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664753 4739 reflector.go:484] object-"openshift-ingress-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664774 4739 reflector.go:484] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664842 4739 reflector.go:484] object-"openshift-dns"/"dns-default-metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664947 4739 reflector.go:484] object-"openshift-service-ca-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.665034 4739 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-bcvzr\": Failed to watch *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-bcvzr&resourceVersion=74056&timeout=43m25s&timeoutSeconds=2605&watch=true\": http2: client connection lost" logger="UnhandledError"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.665135 4739 reflector.go:484] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.671036 4739 reflector.go:484] object-"openstack"/"cert-galera-openstack-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.671433 4739 reflector.go:484] object-"openshift-route-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674054 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674102 4739 reflector.go:484] object-"openstack"/"cinder-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674206 4739 reflector.go:484] object-"openstack"/"barbican-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674934 4739 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675222 4739 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675251 4739 reflector.go:484] object-"openstack"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675279 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675304 4739 reflector.go:484] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675355 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.685859 4739 reflector.go:484] object-"openstack"/"rabbitmq-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698574 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699607 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699671 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699701 4739 reflector.go:484] object-"openshift-etcd-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699729 4739 reflector.go:484] object-"openstack"/"cert-ovncontroller-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699759 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699774 4739 reflector.go:484] object-"openshift-console-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699788 4739 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699847 4739 reflector.go:484] object-"metallb-system"/"controller-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699873 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699894 4739 reflector.go:484] object-"openshift-ingress-canary"/"canary-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699919 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699942 4739 reflector.go:484] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699965 4739 reflector.go:484] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699982 4739 reflector.go:484] object-"openstack"/"keystone-keystone-dockercfg-p8xc6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700002 4739 reflector.go:484] object-"openshift-nmstate"/"openshift-nmstate-webhook": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700021 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.700079 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=74040&timeout=9m27s&timeoutSeconds=567&watch=true\": http2: client connection lost" logger="UnhandledError"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700105 4739 reflector.go:484] object-"openstack"/"ceph-conf-files": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700129 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700154 4739 reflector.go:484] object-"openstack"/"rabbitmq-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700178 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700199 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700220 4739 reflector.go:484] object-"openstack-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700243 4739 reflector.go:484] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700264 4739 reflector.go:484] object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700278 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700303 4739 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700327 4739 reflector.go:484] object-"openshift-nmstate"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700348 4739 reflector.go:484] object-"openshift-authentication"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700368 4739 reflector.go:484] object-"openstack"/"ovsdbserver-nb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700389 4739 reflector.go:484] object-"openshift-multus"/"metrics-daemon-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.701016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701035 4739 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701070 4739 reflector.go:484] object-"openshift-service-ca"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701093 4739 reflector.go:484] object-"cert-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701116 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701138 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701150 4739 reflector.go:484] object-"openstack"/"cert-horizon-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701162 4739 reflector.go:484] object-"openstack"/"cert-rabbitmq-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701174 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701189 4739 reflector.go:484] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701210 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663397 4739 reflector.go:484] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.707032 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an 
error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.711967 4739 reflector.go:484] object-"openstack"/"ovndbcluster-sb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.718989 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720368 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721250 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721502 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721790 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721976 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722125 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722329 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722720 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722932 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.731275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.731749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.739363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.745533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749423 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749837 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749943 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750208 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750330 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750345 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.751095 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755165 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755406 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755492 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755606 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755679 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755745 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755915 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755980 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756061 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756126 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.758242 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.758636 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759023 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759315 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759544 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759897 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-lvklm container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759926 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" podUID="c3e32932-afd4-4e36-8b07-1c6741c86bbd" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.760318 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.760856 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.761291 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.761562 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.763552 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from 
the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.763623 4739 reflector.go:484] object-"openstack"/"cert-cinder-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.765990 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.766467 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.769081 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.769219 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.771666 4739 generic.go:334] "Generic (PLEG): container finished" podID="e47f3183-b43e-4910-b383-b6b674104aee" containerID="fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785621 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786446 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nhqx4" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786802 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788443 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788583 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788968 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 
16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789137 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789156 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789240 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789364 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789523 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789548 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789708 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789866 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.790003 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.790158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.802158 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813675 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814099 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814553 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.815016 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.815627 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.833569 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.837747 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841180 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841409 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841452 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.842804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.842961 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.843152 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844151 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844441 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" 
podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844570 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844625 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844688 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844905 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845063 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845637 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845807 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.846544 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849284 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849621 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849725 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850039 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-g7lpv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850605 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857099 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857226 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857284 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857388 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857521 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857574 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857633 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.858792 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t985g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.858864 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" podUID="f99aadf5-6fdc-42b5-937c-4792f24882ce" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.860346 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.860741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864086 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864495 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864558 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.866586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.866796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.867076 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.872282 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.873122 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.873305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 16:40:08 crc 
kubenswrapper[4739]: I0121 16:40:07.878289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.878552 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.878995 4739 patch_prober.go:28] interesting pod/etcd-operator-b45778765-qqgkc container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.879046 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" podUID="348f800b-2552-4315-9b58-a679d8d8b6f3" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.879314 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.880099 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.880452 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.883992 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.891770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.912677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.923514 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.924535 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.928875 4739 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.947352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.954768 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.955096 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.978627 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.978685 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.979685 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" podUID="ef7118ff-ea20-40ec-aa4d-5711926f4b6c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.980135 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.981146 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.981474 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:07.983183 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983228 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983239 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983141 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.999504 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.999717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.015165 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.032245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.032439 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4cfnm" podUID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038417 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": readLoopPeekFailLocked: read tcp 10.217.0.2:51606->10.217.0.83:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038454 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.050070 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.050153 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.069725 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.069890 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": read tcp 10.217.0.2:58180->10.217.0.82:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.071996 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": read tcp 10.217.0.2:60338->10.217.0.54:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082022 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": read tcp 10.217.0.2:51504->10.217.0.90:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082110 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": read tcp 10.217.0.2:43536->10.217.0.77:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082142 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": read tcp 10.217.0.2:43522->10.217.0.77:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082547 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" 
podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": read tcp 10.217.0.2:51496->10.217.0.90:8081: read: connection reset by peer (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082792 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": read tcp 10.217.0.2:51900->10.217.0.76:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082860 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": read tcp 10.217.0.2:51884->10.217.0.76:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083257 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083301 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083325 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083353 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083380 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083405 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083433 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083460 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083490 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083512 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083534 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.084100 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.084155 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": read tcp 10.217.0.2:58548->10.217.0.81:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.086055 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": read tcp 10.217.0.2:51596->10.217.0.83:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.086135 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": read tcp 10.217.0.2:34572->10.217.0.87:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:08.086176 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": read tcp 10.217.0.2:34568->10.217.0.87:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.087336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.087379 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.106963 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.113665 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.113773 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.114163 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.114415 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161658 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.206019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.216804 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.241111 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.250851 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.273811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.305233 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.321910 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.331200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.349072 4739 trace.go:236] Trace[865516943]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-t5799" (21-Jan-2026 16:40:06.967) (total time: 1381ms): Jan 21 16:40:08 crc kubenswrapper[4739]: Trace[865516943]: [1.381857515s] [1.381857515s] END Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.354405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.377623 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.407438 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.415589 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.430290 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.459257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.473286 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.513098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.513485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.532011 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.550356 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:08.581985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.590983 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.616197 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.627342 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2ngl6" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.657426 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.690720 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.696223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.721374 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.727116 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794511 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.820401 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.840471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.856027 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.871524 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.887369 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerID="a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.890029 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.903679 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" 
containerID="1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.920157 4739 generic.go:334] "Generic (PLEG): container finished" podID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerID="0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.930340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.933941 4739 generic.go:334] "Generic (PLEG): container finished" podID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" containerID="1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.935197 4739 generic.go:334] "Generic (PLEG): container finished" podID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerID="f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.936325 4739 generic.go:334] "Generic (PLEG): container finished" podID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerID="f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.940373 4739 request.go:700] Waited for 1.010552065s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-9xwj5&resourceVersion=74056 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.948978 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.949445 4739 generic.go:334] "Generic (PLEG): container finished" podID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerID="689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.955341 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.973392 4739 generic.go:334] "Generic (PLEG): container finished" podID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerID="95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.973921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.977925 4739 generic.go:334] "Generic (PLEG): container finished" podID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerID="ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.004187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.015235 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.015965 4739 generic.go:334] "Generic (PLEG): container finished" podID="1a751a90-6eaf-445b-8d90-f97d65684393" 
containerID="5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.041132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.044516 4739 generic.go:334] "Generic (PLEG): container finished" podID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerID="501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.055211 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.060990 4739 generic.go:334] "Generic (PLEG): container finished" podID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerID="532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.074209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.078584 4739 generic.go:334] "Generic (PLEG): container finished" podID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerID="71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.088361 4739 generic.go:334] "Generic (PLEG): container finished" podID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerID="56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.097403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.099353 4739 generic.go:334] "Generic (PLEG): container finished" podID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerID="ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.112198 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.132034 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.132233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.143345 4739 generic.go:334] "Generic (PLEG): container finished" podID="84c56862-84f8-419f-af8d-69c644199e10" containerID="81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.146990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.168307 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.176389 4739 generic.go:334] "Generic (PLEG): container finished" podID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerID="b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.186506 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.207205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.209261 4739 generic.go:334] "Generic (PLEG): container finished" podID="52d40272-2ec5-451f-9c41-339c2859d40f" containerID="d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.232023 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.235029 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.235217 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.242041 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250482 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250876 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5hs8m" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.255215 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.263984 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4" exitCode=1 Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.272157 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.276083 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerID="e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.287979 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.288012 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.290420 4739 generic.go:334] "Generic (PLEG): container finished" podID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerID="b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.291375 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.297285 4739 generic.go:334] "Generic (PLEG): container finished" podID="c14851f1-903f-4792-93bf-2c147370f312" containerID="1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: E0121 16:40:09.304618 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d3bc4f_4498_4f3f_ac28_5832348b73a9.slice/crio-conmon-b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ce2630_c747_40f4_8f8b_62414689534b.slice/crio-conmon-d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8f2c9e_6151_4006_922f_dabaa3a79ddd.slice/crio-501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e1c82f_0872_46ed_b8c7_f54328ee947d.slice/crio-conmon-a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-conmon-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda508acc2_8e44_462f_a06a_9ae09a853f5a.slice/crio-conmon-95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-conmon-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-conmon-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f88e7d_645a_4b19_bafd_05ba8bb11914.slice/crio-f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23645bd3_1829_4740_bdb9_82e6a25d7c9c.slice/crio-conmon-ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4ac48b_8e08_41e5_981c_a57ba6c23f52.slice/crio-conmon-e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d40272_2ec5_451f_9c41_339c2859d40f.slice/crio-conmon-d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d3bc4f_4498_4f3f_ac28_5832348b73a9.slice/crio-b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be2175b_8e2d_48d5_938e_e729cb3ac784.slice/crio-conmon-0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod142b0baa_2c17_4e40_b473_7251e3fefddd.slice/crio-conmon-f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ce2630_c747_40f4_8f8b_62414689534b.slice/crio-d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ea78b8_c892_42e6_b39b_51d33fdac25a.slice/crio-conmon-ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f88e7d_645a_4b19_bafd_05ba8bb11914.slice/crio-conmon-f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47f3183_b43e_4910_b383_b6b674104aee.slice/crio-conmon-fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23645bd3_1829_4740_bdb9_82e6a25d7c9c.slice/crio-ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.310001 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.310082 4739 generic.go:334] "Generic (PLEG): container finished" podID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerID="59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.331415 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.348970 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.354990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.359476 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.377379 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.384180 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerID="5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.398010 4739 generic.go:334] "Generic (PLEG): container finished" podID="22ce2630-c747-40f4-8f8b-62414689534b" containerID="d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.418134 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.418293 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.421632 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.424255 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.436411 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.443534 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.443814 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.455403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.474624 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.489271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.508053 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.509293 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.509439 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.526744 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.546785 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.567660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.588058 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.588626 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589046 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589108 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589128 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.607924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.631044 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.648373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.667728 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.687834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.707450 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.729894 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.747499 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.754567 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 21 16:40:09 crc kubenswrapper[4739]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 21 16:40:09 crc kubenswrapper[4739]: > Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.769444 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.781151 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:09 crc 
kubenswrapper[4739]: I0121 16:40:09.781258 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.787570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.791447 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.791517 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.807483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.818598 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.819144 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.820118 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.824397 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.824592 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.827715 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.849285 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 16:40:09 crc 
kubenswrapper[4739]: I0121 16:40:09.870244 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5d5ff" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890847 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890930 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.903207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.910018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.921459 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.921560 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.925787 4739 request.go:700] Waited for 1.890202216s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-zrszd&resourceVersion=73830 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.929441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.947662 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.959836 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.959917 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.966978 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.987280 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.006892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.027593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.061637 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.061716 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.062085 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.066954 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.087459 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.112142 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.128886 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.147497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.167362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.190777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.208000 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.227502 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.252542 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.255801 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.255915 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.270791 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.288102 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.307993 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.327724 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.347647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.367846 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.376585 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.376681 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.388757 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.391323 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 21 
16:40:10 crc kubenswrapper[4739]: Unknown error: Expecting value: line 1 column 1 (char 0) Jan 21 16:40:10 crc kubenswrapper[4739]: > Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.406309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.413299 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.413306 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.429691 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.447410 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.468804 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.487974 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.506957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.526649 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.548891 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.566991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.586839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.607633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.627486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.648482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.656024 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial
tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.656082 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.666689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.691726 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.706659 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.725175 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.725272 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.727262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.759373 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.766326 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.787101 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.807294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.846766 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.872789 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.886645 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.906854 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.926678 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.945455 4739 request.go:700] Waited for 2.73914719s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=73766 Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.947281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.967223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.986427 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.007588 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.044123 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.047388 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ntnw" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.065950 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.089904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.106796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.126560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.147570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.167534 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.186465 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.206756 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.226631 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.246991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.267573 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: 
I0121 16:40:11.293218 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.307399 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.327429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.347702 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.370592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.387007 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.407592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.427460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.447650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.467003 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.488510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.507037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.527321 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.548443 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.574945 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.586955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.606955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.626394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.647421 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 
16:40:11.667335 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.686953 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.707441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.727155 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.747661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.767041 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.770105 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.770182 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.787287 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.807698 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.827115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.847455 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.868295 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.887537 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpgsq" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.906740 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.926865 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.947124 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.965292 4739 request.go:700] Waited for 3.352545782s due to client-side throttling, not priority and fairness, 
request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=73642 Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.967182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.987342 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.007354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.026929 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.046897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.067607 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.087171 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.107313 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.147673 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.167060 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.187055 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.207038 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.227571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.247166 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.266944 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.286936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.307805 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.326697 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.346414 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.368046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.388144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.407546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.427519 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.446833 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.467513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.487020 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.507916 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.527654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.547844 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.567065 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.573914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.587403 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.606965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.627429 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.687062 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.706890 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.727213 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc 
kubenswrapper[4739]: I0121 16:40:12.747222 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.787305 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.807226 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.827580 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.846486 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.867487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.887238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.907395 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.927324 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.946913 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.965626 4739 request.go:700] Waited for 4.078367295s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=73991 Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.967026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.987231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.006754 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.026858 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.047175 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.067079 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.087806 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.107646 4739 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.127359 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.147512 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.167792 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.187133 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.206752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.227832 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.246963 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.267621 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.287499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.307766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.327405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.348050 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.367047 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.386499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.407566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.426852 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.448267 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.495928 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.496076 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.506561 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.526604 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.546647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.568177 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.588077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.607469 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.627238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.647508 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.667266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.686948 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.706537 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.727119 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.746933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.767065 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.786853 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.807437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.827553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.846913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.868285 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-horizon-svc" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.887718 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.907327 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.926828 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.946808 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.966732 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.985939 4739 request.go:700] Waited for 4.817613282s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=73856 Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.987750 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.008866 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.026656 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.047599 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.067263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.087544 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.106785 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.126464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.146706 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.166357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.187126 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.207732 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.226920 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.246856 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.267471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.271864 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-qtp84" podUID="796392e6-8151-400a-b817-4b844f2ec047" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.69:9403/livez\": dial tcp 10.217.0.69:9403: connect: connection refused" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.286920 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.306763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.326835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.739286 4739 request.go:700] Waited for 5.215738584s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/serviceaccounts/nova-nova/token Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.846712 4739 trace.go:236] Trace[1085616224]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.622) (total time: 11224ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[1085616224]: ---"Objects listed" error: 11223ms (16:40:19.846) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[1085616224]: [11.224051655s] [11.224051655s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.847017 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.848557 4739 trace.go:236] Trace[217615069]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.767) (total time: 11081ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[217615069]: ---"Objects listed" error: 11081ms (16:40:19.848) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[217615069]: [11.081226108s] [11.081226108s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.848576 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.861922 4739 trace.go:236] Trace[874140616]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.823) (total time: 11038ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[874140616]: ---"Objects listed" error: 11038ms (16:40:19.861) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[874140616]: [11.038254897s] [11.038254897s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.861955 4739 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.908430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.916200 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.922325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.924144 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.925582 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.925680 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.964155 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.968026 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.974851 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 
10.217.0.74:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988582 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988656 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988705 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988746 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.990111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000407 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000606 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000705 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.001775 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 
10.217.0.82:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.072532 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a61f406-e13a-4295-a1cc-2d9a0b9197eb" containerID="72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8" exitCode=1 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.137590 4739 trace.go:236] Trace[1446757559]: "Reflector ListAndWatch" name:pkg/kubelet/config/apiserver.go:66 (21-Jan-2026 16:40:08.767) (total time: 11370ms): Jan 21 16:40:20 crc kubenswrapper[4739]: Trace[1446757559]: ---"Objects listed" error: 11370ms (16:40:20.137) Jan 21 16:40:20 crc kubenswrapper[4739]: Trace[1446757559]: [11.37033983s] [11.37033983s] END Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.137626 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.145237 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: E0121 16:40:20.178634 4739 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.471s" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.200767 4739 generic.go:334] "Generic (PLEG): container finished" podID="796392e6-8151-400a-b817-4b844f2ec047" containerID="7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7" exitCode=1 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236580 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerDied","Data":"fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236936 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerDied","Data":"a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerDied","Data":"1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237135 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerDied","Data":"0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc"} Jan 21 16:40:20 crc 
kubenswrapper[4739]: I0121 16:40:20.237198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237257 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerDied","Data":"1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237650 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.256930 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.249719 4739 scope.go:117] "RemoveContainer" containerID="689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.258171 4739 scope.go:117] "RemoveContainer" containerID="fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.277730 4739 scope.go:117] "RemoveContainer" containerID="1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.278051 4739 scope.go:117] "RemoveContainer" containerID="0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.296870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298215 4739 scope.go:117] "RemoveContainer" containerID="1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298581 4739 scope.go:117] "RemoveContainer" containerID="1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298870 4739 scope.go:117] "RemoveContainer" containerID="a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.299144 4739 scope.go:117] "RemoveContainer" containerID="7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.366139 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.379971 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380248 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380396 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380494 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380578 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerDied","Data":"f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380787 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerDied","Data":"f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerDied","Data":"689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerDied","Data":"95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381412 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerDied","Data":"ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381656 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerDied","Data":"5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381891 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerDied","Data":"501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerDied","Data":"532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerDied","Data":"71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerDied","Data":"56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382316 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382384 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382458 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382749 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382797 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerDied","Data":"ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerDied","Data":"81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382912 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382925 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382935 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382948 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382957 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382969 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382979 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382990 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" 
event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerDied","Data":"b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383003 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerDied","Data":"d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383030 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383059 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerDied","Data":"e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383100 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383114 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383123 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerDied","Data":"b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383150 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerDied","Data":"1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" 
event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerDied","Data":"59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerDied","Data":"5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383236 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerDied","Data":"d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerDied","Data":"72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerDied","Data":"7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383532 4739 scope.go:117] "RemoveContainer" containerID="c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.400669 4739 scope.go:117] "RemoveContainer" containerID="71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416249 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416380 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" containerID="cri-o://53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002" gracePeriod=30 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416581 4739 scope.go:117] "RemoveContainer" containerID="71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc" Jan 21 16:40:20 crc 
kubenswrapper[4739]: I0121 16:40:20.417643 4739 scope.go:117] "RemoveContainer" containerID="b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.417768 4739 scope.go:117] "RemoveContainer" containerID="95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.418421 4739 scope.go:117] "RemoveContainer" containerID="d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.421372 4739 scope.go:117] "RemoveContainer" containerID="ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.423875 4739 scope.go:117] "RemoveContainer" containerID="b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.440781 4739 scope.go:117] "RemoveContainer" containerID="f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441024 4739 scope.go:117] "RemoveContainer" containerID="ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441179 4739 scope.go:117] "RemoveContainer" containerID="72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441338 4739 scope.go:117] "RemoveContainer" containerID="e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441469 4739 scope.go:117] "RemoveContainer" containerID="d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450058 4739 scope.go:117] "RemoveContainer" containerID="5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450390 4739 scope.go:117] "RemoveContainer" containerID="5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450533 4739 scope.go:117] "RemoveContainer" containerID="f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450672 4739 scope.go:117] "RemoveContainer" containerID="59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450803 4739 scope.go:117] "RemoveContainer" containerID="81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450958 4739 scope.go:117] "RemoveContainer" containerID="56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.451095 4739 scope.go:117] "RemoveContainer" containerID="2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.456424 4739 scope.go:117] "RemoveContainer" containerID="501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.459296 4739 scope.go:117] "RemoveContainer" containerID="532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3" Jan 21 16:40:21 crc kubenswrapper[4739]: E0121 16:40:21.094533 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:21 crc kubenswrapper[4739]: I0121 16:40:21.248257 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:21 crc kubenswrapper[4739]: I0121 16:40:21.820511 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.281834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"b3ff157470c1131b3a8a215b0383a332a27fe190ec430dc498955a9e2b467aa2"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.335599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"1164c2ebbe890b7de8511c7176869dd68dbe06e85fdff5664ec49ad83a2e16c0"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.336171 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.400613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"c925d0a18125b1bd0bed5c3cc64de9f679f19e5be8c60710ce66cfbb6cd8ed9b"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.401263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.478704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"3d1d8a31016d0a83324af866fc9da875349fdfc66c095fcd4fbd4918d774c5e5"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.480233 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.574347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.509301 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"10d91c97f0f477ef9b1892a715b1f6e146a91d9180f77a2e934350d2646b0767"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.524272 4739 generic.go:334] "Generic (PLEG): container finished" podID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerID="53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002" exitCode=0 Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.524540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerDied","Data":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.543952 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"7809799f5fd5dfb716733e688e8dab090a32c9949251a5c48113c7212959a2c0"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.544080 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.551522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"254e9a7bb9117b5a9e0bbda24dcbf64c1c99130825e3d456ab9a038a3c2e6ffd"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.551950 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.561338 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"4ce95f7f77a81b333eb210a028dcad3501d855a929792d244c263782e44433e5"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.561490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.569928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"94ea3ca7b1d5c312e63d169964e0a0f778c3cf79014f0606d256285e4c64af7e"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.570156 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.581758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"c6c4b2cbb7338d31700d52e0368be2e51bbaebb0702a39c71e66e00db3142c72"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.584602 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"29b29dc9088264d688ceccd9de2e29e62dd99fdf556f38a9faed3aa256050010"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.584914 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.588051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"8dfcec1188675617e0cdfbe9790bb775b514167fdb2fd3d25fce29e39ae432b2"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.588229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.591304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"76e197a5700258c0e8611560f0b08fa245b8837b11f3cd29cb99f5532caa4cf9"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.592242 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.609038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"6f7919b995a3a28b96baa4a1083eb614768872e6e35496c4c3abe9de7a479808"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.619416 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"6327066b34fee90b1621ffc35cd373d841e7628d9bcc86a22e3873f3af7d3e06"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.621694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"832ae06313483d70c127f7967486b8920186528f61b53d90a277849e4d44958c"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.621769 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.622302 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.624284 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.625576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73ff212c32653f0aa16185b10acc719939f1c7c687debd903372db1f0acdfd77"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.630337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"2a2ae5674992de508def7f902d5b635a34cae944642a0807177e4aecc66ea374"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"a24d209121ea8ddcc9352e532aae92e5871a81e643a1bf294d0bd58dcf59288e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"fea07ef1c3887ef07b2e88795976b822ca70cac9856d05f3bdbfdcae8f0ffd94"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641714 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"b781304e19a11cd79a8f691fe85c5856ffd372a462dfab4272251c07d97e163d"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.646502 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.666707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"3fe2836fc95d7179b204ceaa1031241d9b3a8bc9487df876dd5c1934aa5c4b43"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.667894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.671672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"cb24bd0c46a93214cf0d83adfb03a866e6597cff0d8754bbfba454175cb169b4"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.671965 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.678292 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 21 16:40:24 crc 
kubenswrapper[4739]: I0121 16:40:24.678647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2ae465dab007450bd7b17bfd685889aa66bef0a9b4b17c01c7ce12217f68ddc2"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.686091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"3e59a8e813a6ef848112840021a16a1816e19dc6d8aa5a22052645c8cb3f8713"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.689538 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.701030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"9bc2c472a0f2947185d7bb5729daaf416e96d02937107614443d231b99dea95e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.702345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.717901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"2003e3ed868ee89696270eba68a9de5f04e077e75d244002d4f69f79eeca43a7"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.718918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.744358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"37e3bae84a8891feefd5416399434c4d10f41a08e04e1e3b17573676dfdc326e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.745147 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.773853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"368f01a5d468ccee000fd5c8f83d6f3919d6459025d438e5b97fa1579a52c042"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.774296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.776974 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.777429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:25 crc 
kubenswrapper[4739]: I0121 16:40:25.788167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"534b703c3028e0d61640547fd274451de79eb368266dad4a8f45d474c99affd8"} Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.673697 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.678174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804185 4739 generic.go:334] "Generic (PLEG): container finished" podID="f61fadad-2760-4a0f-8f1c-58598416d39a" containerID="54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab" exitCode=0 Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerDied","Data":"54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab"} Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804466 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.805120 4739 scope.go:117] "RemoveContainer" containerID="54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab" Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.816060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"be44b517505a5d17d2adc1e3019ffc5a22c7468246691d184921eb966e45888d"} Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817585 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28ff6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817627 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" podUID="f61fadad-2760-4a0f-8f1c-58598416d39a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.126193 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.238297 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.252758 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.292095 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.408707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.443396 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.510937 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.589693 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.781993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.793376 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.826676 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.831544 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.891027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.922748 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.960784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.062308 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.257346 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.375406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.408667 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 
16:40:30.664529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.738584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:31 crc kubenswrapper[4739]: E0121 16:40:31.394108 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:32 crc kubenswrapper[4739]: I0121 16:40:32.578629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.176021 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222886 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222938 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222979 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.223736 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.223798 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" gracePeriod=600 Jan 21 16:40:35 crc kubenswrapper[4739]: E0121 16:40:35.342671 4739 
Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922563 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" exitCode=0
Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"}
Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922656 4739 scope.go:117] "RemoveContainer" containerID="d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"
Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.923268 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:40:35 crc kubenswrapper[4739]: E0121 16:40:35.923642 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.930566 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"
Jan 21 16:40:41 crc kubenswrapper[4739]: E0121 16:40:41.620944 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:40:41 crc kubenswrapper[4739]: I0121 16:40:41.773794 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 16:40:50 crc kubenswrapper[4739]: I0121 16:40:50.782980 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:40:50 crc kubenswrapper[4739]: E0121 16:40:50.783900 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:40:51 crc kubenswrapper[4739]: E0121 16:40:51.881433 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:40:59 crc kubenswrapper[4739]: I0121 16:40:59.903788 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"
Jan 21 16:41:02 crc kubenswrapper[4739]: E0121 16:41:02.157074 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:41:03 crc kubenswrapper[4739]: I0121 16:41:03.782763 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:41:03 crc kubenswrapper[4739]: E0121 16:41:03.783608 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:41:14 crc kubenswrapper[4739]: I0121 16:41:14.783106 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:41:14 crc kubenswrapper[4739]: E0121 16:41:14.785452 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:41:27 crc kubenswrapper[4739]: I0121 16:41:27.783542 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:41:27 crc kubenswrapper[4739]: E0121 16:41:27.784386 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:41:38 crc kubenswrapper[4739]: I0121 16:41:38.789729 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:41:38 crc kubenswrapper[4739]: E0121 16:41:38.790426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:41:52 crc kubenswrapper[4739]: I0121 16:41:52.783012 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:41:52 crc kubenswrapper[4739]: E0121 16:41:52.783769 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.807458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-content"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808287 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-content"
Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808313 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808319 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server"
Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808339 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-utilities"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808346 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-utilities"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808514 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.809900 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.824866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.825046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.825180 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.836965 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.927025 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.957662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:55 crc kubenswrapper[4739]: I0121 16:41:55.128201 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.123967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640639 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce" exitCode=0
Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"}
Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"70b6c459eb7385ab8a11058aacfa2a1cf409b466af4e843f0b318ee26fc620c0"}
Jan 21 16:41:58 crc kubenswrapper[4739]: I0121 16:41:58.661294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"}
Jan 21 16:42:01 crc kubenswrapper[4739]: I0121 16:42:01.687209 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4" exitCode=0
Jan 21 16:42:01 crc kubenswrapper[4739]: I0121 16:42:01.687333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"}
Jan 21 16:42:02 crc kubenswrapper[4739]: I0121 16:42:02.698893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"}
Jan 21 16:42:02 crc kubenswrapper[4739]: I0121 16:42:02.727523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xws7s" podStartSLOduration=3.267161946 podStartE2EDuration="8.727500159s" podCreationTimestamp="2026-01-21 16:41:54 +0000 UTC" firstStartedPulling="2026-01-21 16:41:56.642838577 +0000 UTC m=+4548.333544841" lastFinishedPulling="2026-01-21 16:42:02.10317679 +0000 UTC m=+4553.793883054" observedRunningTime="2026-01-21 16:42:02.722546204 +0000 UTC m=+4554.413252478" watchObservedRunningTime="2026-01-21 16:42:02.727500159 +0000 UTC m=+4554.418206423"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.362355 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.363231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.375104 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:05 crc kubenswrapper[4739]: E0121 16:42:05.375368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:06 crc kubenswrapper[4739]: I0121 16:42:06.592219 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:42:06 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:42:06 crc kubenswrapper[4739]: >
Jan 21 16:42:16 crc kubenswrapper[4739]: I0121 16:42:16.194094 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:42:16 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:42:16 crc kubenswrapper[4739]: >
Jan 21 16:42:16 crc kubenswrapper[4739]: I0121 16:42:16.783845 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:16 crc kubenswrapper[4739]: E0121 16:42:16.784165 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:25 crc kubenswrapper[4739]: I0121 16:42:25.178704 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:25 crc kubenswrapper[4739]: I0121 16:42:25.229850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:26 crc kubenswrapper[4739]: I0121 16:42:26.011061 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:26 crc kubenswrapper[4739]: I0121 16:42:26.588807 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" containerID="cri-o://b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819" gracePeriod=2
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.363563 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.364941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.364989 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.365151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.365655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities" (OuterVolumeSpecName: "utilities") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.374157 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf" (OuterVolumeSpecName: "kube-api-access-zdwtf") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "kube-api-access-zdwtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.467367 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.467407 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.496571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.568593 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596007 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819" exitCode=0
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"}
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"70b6c459eb7385ab8a11058aacfa2a1cf409b466af4e843f0b318ee26fc620c0"}
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596125 4739 scope.go:117] "RemoveContainer" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596268 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.624266 4739 scope.go:117] "RemoveContainer" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.654018 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.663374 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.667683 4739 scope.go:117] "RemoveContainer" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.697740 4739 scope.go:117] "RemoveContainer" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.700749 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": container with ID starting with b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819 not found: ID does not exist" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703156 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"} err="failed to get container status \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": rpc error: code = NotFound desc = could not find container \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": container with ID starting with b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819 not found: ID does not exist"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703203 4739 scope.go:117] "RemoveContainer" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.703745 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": container with ID starting with 04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4 not found: ID does not exist" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703776 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"} err="failed to get container status \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": rpc error: code = NotFound desc = could not find container \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": container with ID starting with 04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4 not found: ID does not exist"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703795 4739 scope.go:117] "RemoveContainer" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.704232 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": container with ID starting with a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce not found: ID does not exist" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.704263 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"} err="failed to get container status \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": rpc error: code = NotFound desc = could not find container \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": container with ID starting with a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce not found: ID does not exist"
Jan 21 16:42:28 crc kubenswrapper[4739]: I0121 16:42:28.795326 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" path="/var/lib/kubelet/pods/b93a3dfd-670c-4b4d-9fbc-630333be67e6/volumes"
Jan 21 16:42:29 crc kubenswrapper[4739]: I0121 16:42:29.783733 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:29 crc kubenswrapper[4739]: E0121 16:42:29.784236 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:43 crc kubenswrapper[4739]: I0121 16:42:43.783289 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:43 crc kubenswrapper[4739]: E0121 16:42:43.784934 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:58 crc kubenswrapper[4739]: I0121 16:42:58.791412 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:58 crc kubenswrapper[4739]: E0121 16:42:58.792311 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:09 crc kubenswrapper[4739]: I0121 16:43:09.783252 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:09 crc kubenswrapper[4739]: E0121 16:43:09.784067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.273063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.278732 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-utilities"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.278874 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-utilities"
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.278971 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279050 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.279159 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-content"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279276 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-content"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.281615 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.283338 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402000 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.403326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.441573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.600581 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:15 crc kubenswrapper[4739]: I0121 16:43:15.198022 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065431 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03" exitCode=0
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065486 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03"}
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"c67644d58a633f259594bea6cec5c38d3f7f7f50f4dddc04cee43c6e54214f06"}
Jan 21 16:43:17 crc kubenswrapper[4739]: I0121 16:43:17.075897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4"}
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.088459 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4" exitCode=0
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.088520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4"}
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.091914 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:43:19 crc kubenswrapper[4739]: I0121 16:43:19.101072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf"}
Jan 21 16:43:19 crc kubenswrapper[4739]: I0121 16:43:19.130674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5nrf" podStartSLOduration=2.715999298 podStartE2EDuration="5.130657041s" podCreationTimestamp="2026-01-21 16:43:14 +0000 UTC" firstStartedPulling="2026-01-21 16:43:16.068693214 +0000 UTC m=+4627.759399478" lastFinishedPulling="2026-01-21 16:43:18.483350957 +0000 UTC m=+4630.174057221" observedRunningTime="2026-01-21 16:43:19.11849874 +0000 UTC m=+4630.809205014" watchObservedRunningTime="2026-01-21 16:43:19.130657041 +0000 UTC m=+4630.821363305"
m=+4630.821363305" Jan 21 16:43:23 crc kubenswrapper[4739]: I0121 16:43:23.783173 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:43:23 crc kubenswrapper[4739]: E0121 16:43:23.784016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.601026 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.602109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.674292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:25 crc kubenswrapper[4739]: I0121 16:43:25.208885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:25 crc kubenswrapper[4739]: I0121 16:43:25.253513 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"] Jan 21 16:43:27 crc kubenswrapper[4739]: I0121 16:43:27.184253 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5nrf" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server" containerID="cri-o://aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf" gracePeriod=2 Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.193886 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf" exitCode=0 Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.194432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf"} Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.432543 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.563065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.564378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.564444 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.565322 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities" (OuterVolumeSpecName: "utilities") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.570563 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp" (OuterVolumeSpecName: "kube-api-access-xrphp") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "kube-api-access-xrphp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.592077 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667648 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667693 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667705 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.208405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"c67644d58a633f259594bea6cec5c38d3f7f7f50f4dddc04cee43c6e54214f06"} Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.208469 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf" Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.209979 4739 scope.go:117] "RemoveContainer" containerID="aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf" Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.244162 4739 scope.go:117] "RemoveContainer" containerID="58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4" Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.247906 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"] Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.262163 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"] Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.277021 4739 scope.go:117] "RemoveContainer" containerID="96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03" Jan 21 16:43:30 crc kubenswrapper[4739]: I0121 16:43:30.794456 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" path="/var/lib/kubelet/pods/9f3a95fd-1ff9-497e-8989-06e2ae4d6642/volumes" Jan 21 16:43:34 crc kubenswrapper[4739]: I0121 16:43:34.783249 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:43:34 crc kubenswrapper[4739]: E0121 16:43:34.784159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:43:45 crc kubenswrapper[4739]: I0121 16:43:45.782699 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:43:45 crc kubenswrapper[4739]: E0121 16:43:45.783449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:43:56 crc kubenswrapper[4739]: I0121 16:43:56.782868 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:43:56 crc kubenswrapper[4739]: E0121 16:43:56.783532 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:44:10 crc kubenswrapper[4739]: I0121 16:44:10.782845 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:44:10 crc kubenswrapper[4739]: E0121 16:44:10.783715 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:44:23 crc kubenswrapper[4739]: I0121 16:44:23.782904 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:44:23 crc kubenswrapper[4739]: E0121 16:44:23.783788 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:44:24 crc kubenswrapper[4739]: I0121 16:44:24.724159 4739 generic.go:334] "Generic (PLEG): container finished" podID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerID="91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937" exitCode=1 Jan 21 16:44:24 crc kubenswrapper[4739]: I0121 16:44:24.724217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerDied","Data":"91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937"} Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.091296 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123450 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123579 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123886 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123960 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123996 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.126190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data" (OuterVolumeSpecName: "config-data") pod 
"156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.128380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.136267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.137619 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx" (OuterVolumeSpecName: "kube-api-access-75dsx") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "kube-api-access-75dsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.145803 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.162954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.176266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.185698 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.201249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227006 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227039 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227051 4739 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227091 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227104 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228369 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228392 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228404 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.251400 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.330591 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerDied","Data":"6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201"} Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741561 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741590 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.836952 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.837965 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-content" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.837983 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-content" Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.837999 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838007 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.838020 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838029 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server" Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.838055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-utilities" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838063 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-utilities" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838278 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838993 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.858188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.882370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.892420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.892836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.994165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.994334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.995261 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.016288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.031442 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:35 crc 
kubenswrapper[4739]: I0121 16:44:35.199959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.646394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:44:35 crc kubenswrapper[4739]: W0121 16:44:35.660375 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod138396ea_a681_4317_beb7_bea153d87be8.slice/crio-40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726 WatchSource:0}: Error finding container 40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726: Status 404 returned error can't find the container with id 40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726 Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.783209 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:44:35 crc kubenswrapper[4739]: E0121 16:44:35.783574 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.816281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"138396ea-a681-4317-beb7-bea153d87be8","Type":"ContainerStarted","Data":"40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726"} Jan 21 16:44:36 crc kubenswrapper[4739]: I0121 16:44:36.831729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"138396ea-a681-4317-beb7-bea153d87be8","Type":"ContainerStarted","Data":"43a1c565c267d483b29bad6ac772de02350e626c88ca1de15e4b9176b2896bed"} Jan 21 16:44:46 crc kubenswrapper[4739]: I0121 16:44:46.783470 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:44:46 crc kubenswrapper[4739]: E0121 16:44:46.784279 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.210677 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=25.324513588 podStartE2EDuration="26.210625113s" podCreationTimestamp="2026-01-21 16:44:34 +0000 UTC" firstStartedPulling="2026-01-21 16:44:35.663388349 +0000 UTC m=+4707.354094633" lastFinishedPulling="2026-01-21 16:44:36.549499894 +0000 UTC m=+4708.240206158" observedRunningTime="2026-01-21 16:44:36.857212872 +0000 UTC m=+4708.547919136" watchObservedRunningTime="2026-01-21 
16:45:00.210625113 +0000 UTC m=+4731.901331387" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.216352 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"] Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.217782 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.220880 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.221358 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.238470 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"] Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.317384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.318139 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.319210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: 
\"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.423246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.429030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.438366 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.545586 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:01 crc kubenswrapper[4739]: I0121 16:45:01.092266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"] Jan 21 16:45:01 crc kubenswrapper[4739]: I0121 16:45:01.782853 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:01 crc kubenswrapper[4739]: E0121 16:45:01.783400 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076047 4739 generic.go:334] "Generic (PLEG): container finished" podID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerID="d91f9dd5c83eaaea3f18563fcd72191b0954acb06e332c4d592cedb3624b2ae1" exitCode=0 Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerDied","Data":"d91f9dd5c83eaaea3f18563fcd72191b0954acb06e332c4d592cedb3624b2ae1"} Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerStarted","Data":"18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840"} Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.539565 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601194 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.602079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.607267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9" (OuterVolumeSpecName: "kube-api-access-gc7q9") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "kube-api-access-gc7q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.608460 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703474 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703522 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703534 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerDied","Data":"18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840"} Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105844 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105845 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840" Jan 21 16:45:05 crc kubenswrapper[4739]: I0121 16:45:05.326871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:45:05 crc kubenswrapper[4739]: I0121 16:45:05.336183 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:45:06 crc kubenswrapper[4739]: I0121 16:45:06.794642 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" path="/var/lib/kubelet/pods/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc/volumes" Jan 21 16:45:07 crc kubenswrapper[4739]: I0121 16:45:07.093271 4739 scope.go:117] "RemoveContainer" containerID="dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.584101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:08 crc kubenswrapper[4739]: E0121 16:45:08.584908 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.584929 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.585210 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.597488 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.597610 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gd2st"/"kube-root-ca.crt" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gd2st"/"openshift-service-ca.crt" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600403 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gd2st"/"default-dockercfg-2p6bc" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.630407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.630615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.732399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.732510 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.733010 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.754376 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.918826 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:09 crc kubenswrapper[4739]: I0121 16:45:09.386989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:09 crc kubenswrapper[4739]: W0121 16:45:09.394129 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a63aa7f_39ab_48de_bb46_86db1661dfbf.slice/crio-0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce WatchSource:0}: Error finding container 0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce: Status 404 returned error can't find the container with id 0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce Jan 21 16:45:10 crc kubenswrapper[4739]: I0121 16:45:10.364522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce"} Jan 21 16:45:15 crc kubenswrapper[4739]: I0121 16:45:15.783081 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:15 crc kubenswrapper[4739]: E0121 16:45:15.783943 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:17 crc kubenswrapper[4739]: I0121 16:45:17.441141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392"} Jan 21 16:45:18 crc kubenswrapper[4739]: I0121 16:45:18.452305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41"} Jan 21 16:45:18 crc kubenswrapper[4739]: I0121 16:45:18.480781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/must-gather-smrdj" podStartSLOduration=2.888518415 podStartE2EDuration="10.480762654s" podCreationTimestamp="2026-01-21 16:45:08 +0000 UTC" firstStartedPulling="2026-01-21 16:45:09.395873086 +0000 UTC m=+4741.086579350" lastFinishedPulling="2026-01-21 16:45:16.988117325 +0000 UTC m=+4748.678823589" observedRunningTime="2026-01-21 16:45:18.471900733 +0000 UTC m=+4750.162606997" watchObservedRunningTime="2026-01-21 16:45:18.480762654 +0000 UTC m=+4750.171468918" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.706302 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.708910 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.859313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.859762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.961840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.962011 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.962688 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.979187 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:24 crc kubenswrapper[4739]: I0121 16:45:24.029245 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:24 crc kubenswrapper[4739]: I0121 16:45:24.507211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerStarted","Data":"207745b33a9bb849d9551277e45c9d3a4dd9401569624c202bb316933136eeb0"} Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.391956 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.413230 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.459567 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64d4fbc96d-dlgxh_4ea7c1ca-928b-4218-b3da-df8050838259/barbican-keystone-listener-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.468494 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64d4fbc96d-dlgxh_4ea7c1ca-928b-4218-b3da-df8050838259/barbican-keystone-listener/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.494430 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b898c7bc9-wlcjc_f3bf76ca-61be-4cbe-b8ce-780502ae0205/barbican-worker-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.501621 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b898c7bc9-wlcjc_f3bf76ca-61be-4cbe-b8ce-780502ae0205/barbican-worker/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.553681 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b_47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.588665 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-central-agent/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.589077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-central-agent/1.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.616114 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-notification-agent/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.625474 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/sg-core/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.635861 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/proxy-httpd/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.656263 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-788g6_faa406e8-9005-4c42-a434-cc5d36dbf56c/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.672651 4739 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg_1b774039-a2a8-4a04-9436-570c76bb8852/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.691853 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.761387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.783410 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:27 crc kubenswrapper[4739]: E0121 16:45:27.783671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.902663 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e7c2005-9f9a-41b3-b7c0-7dc430637ba8/cinder-backup/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.922666 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e7c2005-9f9a-41b3-b7c0-7dc430637ba8/probe/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.965993 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/cinder-scheduler/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.006580 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/probe/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.097483 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7353ecec-24ef-48a5-9046-95c8e0b77de0/cinder-volume/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.117122 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7353ecec-24ef-48a5-9046-95c8e0b77de0/probe/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.151033 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sbklq_9559d041-04b3-47c2-8121-b348ad047032/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.192349 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8_c9b66501-25d1-48dd-a7ad-9b98893bcede/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.355460 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-256zk_5a695c51-4390-4957-8320-d381011ebcf9/dnsmasq-dns/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.370085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-256zk_5a695c51-4390-4957-8320-d381011ebcf9/init/0.log" 
Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.409324 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.441798 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-httpd/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.461600 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.486588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-httpd/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.728532 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.847115 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.905042 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp_e57ad057-1847-4336-a884-ca693f4ee867/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.952500 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rp7kt_863214f8-2df5-42e2-ba92-293df6d7adaf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.308536 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-755fb5c478-dt2rg_5e665ce5-7f58-4b17-9ccf-3e641a34eae8/keystone-api/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.333473 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483521-cztpq_dc21193f-dbfb-4e0d-87d6-48f184c466ef/keystone-cron/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.348396 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a559158-ae1f-4b55-bf71-90061b51b807/kube-state-metrics/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.644044 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9_254da8b1-762d-4c96-a7e1-fe39f6988eac/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.697387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1d033dc1-1e44-4e90-8d00-371620e1d520/manila-api-log/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.849896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1d033dc1-1e44-4e90-8d00-371620e1d520/manila-api/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.159929 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/manila-scheduler/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.180489 4739 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/probe/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.475141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/manila-share/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.548031 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/probe/0.log" Jan 21 16:45:38 crc kubenswrapper[4739]: I0121 16:45:38.678379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerStarted","Data":"b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755"} Jan 21 16:45:38 crc kubenswrapper[4739]: I0121 16:45:38.700053 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/crc-debug-289bp" podStartSLOduration=1.475963543 podStartE2EDuration="15.700034003s" podCreationTimestamp="2026-01-21 16:45:23 +0000 UTC" firstStartedPulling="2026-01-21 16:45:24.066123854 +0000 UTC m=+4755.756830118" lastFinishedPulling="2026-01-21 16:45:38.290194314 +0000 UTC m=+4769.980900578" observedRunningTime="2026-01-21 16:45:38.691562063 +0000 UTC m=+4770.382268327" watchObservedRunningTime="2026-01-21 16:45:38.700034003 +0000 UTC m=+4770.390740267" Jan 21 16:45:39 crc kubenswrapper[4739]: I0121 16:45:39.782778 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:40 crc kubenswrapper[4739]: I0121 16:45:40.705390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.017535 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.030356 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.058792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.142020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.158542 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.164233 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.177428 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:45:52 crc 
kubenswrapper[4739]: I0121 16:45:52.187719 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.193883 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.199991 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.212240 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.236940 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.261787 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.279008 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.293991 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.815705 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.825461 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.005235 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_aa850895-9a18-4cff-83f8-bf7eea44559e/memcached/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.143918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b578bfdc-tzd9g_91caca26-903d-4f3c-ba18-c31a43c9df73/neutron-api/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.195152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b578bfdc-tzd9g_91caca26-903d-4f3c-ba18-c31a43c9df73/neutron-httpd/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.222779 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6_0a2c5efb-5467-4985-8526-56adf203eef0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.445755 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-log/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.037247 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-api/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.141578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ef6e43f8-c2d1-4991-992b-30ebd3fc66cf/nova-cell0-conductor-conductor/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.226959 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_05cfdc9a-d9ef-45eb-99dd-a7393fdca241/nova-cell1-conductor-conductor/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.321086 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_52afdd4f-bb93-4cc6-b074-7391852099ee/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.388969 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr_9f1cbca1-44a3-4825-b255-dfb219fdbda7/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.468141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-log/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.113237 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-metadata/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.280406 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a2569778-376b-41fc-bdca-3bb914efd1b1/nova-scheduler-scheduler/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.301878 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d6502a4d-1f62-4f00-8c3f-7e51b14b616a/galera/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.315634 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d6502a4d-1f62-4f00-8c3f-7e51b14b616a/mysql-bootstrap/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.346846 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d9c86609-18a0-47cb-8ce3-863d829a2f65/galera/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.358572 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d9c86609-18a0-47cb-8ce3-863d829a2f65/mysql-bootstrap/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.370134 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8f733769-d3f8-4ced-be3b-cbb84339dac5/openstackclient/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.383598 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g28pm_614c729f-eac4-4445-bfdd-750236431c69/ovn-controller/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.395806 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5sdng_d9e43d4c-0e56-42cb-9f23-e225a7451d52/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.412795 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovsdb-server/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 
16:46:05.428555 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovs-vswitchd/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.442521 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovsdb-server-init/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.490551 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8z5wj_bf8a2940-3bba-4811-a552-01919ddcdde1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.502978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3600d295-3864-446c-a407-b1b80c2a2c50/ovn-northd/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.511085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3600d295-3864-446c-a407-b1b80c2a2c50/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.531301 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/ovsdbserver-nb/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.536607 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.552640 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/ovsdbserver-sb/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.564560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.657938 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-log/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.749581 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-api/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.778509 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23fcbb0d-682e-40b5-9921-f484672af568/rabbitmq/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.787160 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23fcbb0d-682e-40b5-9921-f484672af568/setup-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.822041 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/rabbitmq/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.827462 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/setup-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.847894 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv_1942d825-3f2c-4555-9212-4771283ad4cb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 
16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.860284 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm_26f6f5f4-900a-4a62-af65-9a20d9b30008/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.879011 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-z454s_056d99bf-bfdf-40d6-b888-0390a1674524/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.891936 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xkcn4_c9035d12-0cb2-4d4c-a202-984fdb561167/ssh-known-hosts-edpm-deployment/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.955618 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_156e0f25-edfe-462a-ae5f-9f5642bef8bb/tempest-tests-tempest-tests-runner/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.963009 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_138396ea-a681-4317-beb7-bea153d87be8/test-operator-logs-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.977680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx_e70c9a47-9608-42ee-b307-be70bb44d50b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:22 crc kubenswrapper[4739]: I0121 16:46:22.209014 4739 generic.go:334] "Generic (PLEG): container finished" podID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerID="b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755" exitCode=0 Jan 21 16:46:22 crc kubenswrapper[4739]: I0121 16:46:22.209132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerDied","Data":"b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755"} Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.316108 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.352342 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.362879 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.423927 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"e04df425-39b4-48fc-9b12-ec8b589aff9e\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.424088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"e04df425-39b4-48fc-9b12-ec8b589aff9e\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.424747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host" (OuterVolumeSpecName: "host") pod "e04df425-39b4-48fc-9b12-ec8b589aff9e" (UID: "e04df425-39b4-48fc-9b12-ec8b589aff9e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.443886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525" (OuterVolumeSpecName: "kube-api-access-pv525") pod "e04df425-39b4-48fc-9b12-ec8b589aff9e" (UID: "e04df425-39b4-48fc-9b12-ec8b589aff9e"). InnerVolumeSpecName "kube-api-access-pv525". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.526587 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.526631 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.228513 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="207745b33a9bb849d9551277e45c9d3a4dd9401569624c202bb316933136eeb0" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.228924 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.542771 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:24 crc kubenswrapper[4739]: E0121 16:46:24.543241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.543255 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.543516 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.544238 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.559053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.559120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.661920 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.661970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.662033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.686735 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.793660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" 
path="/var/lib/kubelet/pods/e04df425-39b4-48fc-9b12-ec8b589aff9e/volumes" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.867540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.237687 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerStarted","Data":"6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b"} Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.238058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerStarted","Data":"dae72fb60a42168dd7c115c976a0ec7e59e18ecf98dc4968042f46b3badc18c2"} Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.255479 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" podStartSLOduration=1.255453004 podStartE2EDuration="1.255453004s" podCreationTimestamp="2026-01-21 16:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:46:25.248601917 +0000 UTC m=+4816.939308181" watchObservedRunningTime="2026-01-21 16:46:25.255453004 +0000 UTC m=+4816.946159268" Jan 21 16:46:26 crc kubenswrapper[4739]: I0121 16:46:26.247048 4739 generic.go:334] "Generic (PLEG): container finished" podID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerID="6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b" exitCode=0 Jan 21 16:46:26 crc kubenswrapper[4739]: I0121 16:46:26.247100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerDied","Data":"6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b"} Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.374957 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.413217 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.422493 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532554 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host" (OuterVolumeSpecName: "host") pod "e55ff3ff-fc07-405a-a890-d3340ccdeefe" (UID: "e55ff3ff-fc07-405a-a890-d3340ccdeefe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.533067 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.540837 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb" (OuterVolumeSpecName: "kube-api-access-ttkgb") pod "e55ff3ff-fc07-405a-a890-d3340ccdeefe" (UID: "e55ff3ff-fc07-405a-a890-d3340ccdeefe"). InnerVolumeSpecName "kube-api-access-ttkgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.635316 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.277480 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae72fb60a42168dd7c115c976a0ec7e59e18ecf98dc4968042f46b3badc18c2" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.277610 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.558481 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:28 crc kubenswrapper[4739]: E0121 16:46:28.559060 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.559080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.559271 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.560127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.656193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.656330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758462 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758579 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.791775 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.803841 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" 
path="/var/lib/kubelet/pods/e55ff3ff-fc07-405a-a890-d3340ccdeefe/volumes" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.886213 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: W0121 16:46:28.910727 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddda030_3df5_4c79_822b_6c027ffcebfd.slice/crio-d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707 WatchSource:0}: Error finding container d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707: Status 404 returned error can't find the container with id d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707 Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287225 4739 generic.go:334] "Generic (PLEG): container finished" podID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerID="7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0" exitCode=0 Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" event={"ID":"5ddda030-3df5-4c79-822b-6c027ffcebfd","Type":"ContainerDied","Data":"7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0"} Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" event={"ID":"5ddda030-3df5-4c79-822b-6c027ffcebfd","Type":"ContainerStarted","Data":"d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707"} Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.332698 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.341940 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.423542 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.592455 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"5ddda030-3df5-4c79-822b-6c027ffcebfd\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593003 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"5ddda030-3df5-4c79-822b-6c027ffcebfd\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593052 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host" (OuterVolumeSpecName: "host") pod "5ddda030-3df5-4c79-822b-6c027ffcebfd" (UID: "5ddda030-3df5-4c79-822b-6c027ffcebfd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593530 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.604056 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h" (OuterVolumeSpecName: "kube-api-access-g6q8h") pod "5ddda030-3df5-4c79-822b-6c027ffcebfd" (UID: "5ddda030-3df5-4c79-822b-6c027ffcebfd"). InnerVolumeSpecName "kube-api-access-g6q8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.695259 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.794896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" path="/var/lib/kubelet/pods/5ddda030-3df5-4c79-822b-6c027ffcebfd/volumes" Jan 21 16:46:31 crc kubenswrapper[4739]: I0121 16:46:31.320781 4739 scope.go:117] "RemoveContainer" containerID="7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0" Jan 21 16:46:31 crc kubenswrapper[4739]: I0121 16:46:31.320972 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:32 crc kubenswrapper[4739]: I0121 16:46:32.954730 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.002071 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.029863 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.064018 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.076507 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.077209 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.087447 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.093792 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.102682 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.116896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.164993 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.177954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.180267 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.191741 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.197076 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.223520 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.497759 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.516782 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.516968 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.541210 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.609113 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.631870 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.671852 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.690474 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.717644 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.728211 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.767491 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.788978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.842362 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.853703 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.854720 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.874077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.875635 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.910746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log" Jan 21 16:46:34 crc kubenswrapper[4739]: I0121 16:46:34.051465 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log" Jan 21 16:46:34 crc kubenswrapper[4739]: I0121 16:46:34.096376 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.545853 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.553960 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.571543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.616057 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.635971 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.648214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.659283 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.670433 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.682954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.682998 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.696943 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.748711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.758403 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.760074 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.770092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.771092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.157068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.170891 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.180604 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.327274 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.360869 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.371563 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.374942 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.383243 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.683252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.700612 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.711543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.720232 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.742574 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.754949 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.265245 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.270932 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.293340 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.777726 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.798017 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.807341 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.821690 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.826265 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.844445 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.856519 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.864794 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.872757 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.891162 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.902039 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.912725 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:47:37 crc kubenswrapper[4739]: I0121 16:47:37.303924 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:47:37 crc kubenswrapper[4739]: I0121 16:47:37.311329 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.397505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/extract/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.404708 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/util/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.415873 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/pull/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.432467 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/extract/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.438720 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/util/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.449771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/pull/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.325658 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/registry-server/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.330889 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/extract-utilities/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.338434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/extract-content/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.189422 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/registry-server/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.195511 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/extract-utilities/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.209569 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/extract-content/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.238522 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/1.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.240419 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.382053 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/registry-server/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.386947 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/extract-utilities/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.394873 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/extract-content/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.116446 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/registry-server/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.121756 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/extract-utilities/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.131007 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/extract-content/0.log" Jan 21 16:48:05 crc kubenswrapper[4739]: I0121 16:48:05.222794 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:05 crc kubenswrapper[4739]: I0121 16:48:05.223638 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:48:35 crc kubenswrapper[4739]: I0121 16:48:35.222422 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:35 crc kubenswrapper[4739]: I0121 16:48:35.222878 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.222673 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.223231 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.223281 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.224462 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.224555 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" gracePeriod=600 Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709584 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" exitCode=0 Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709977 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.225810 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.232371 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.256792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.337328 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.395450 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.414918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.420771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.438366 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.632453 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.828376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.836376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.860911 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.877973 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.886963 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.901302 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.908571 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.926289 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 
16:49:22.951578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.970234 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.987018 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.291132 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.381470 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.406561 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.487196 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.487779 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.505918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.509216 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.509695 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.530711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.538911 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.548261 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.568196 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.609510 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.634588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.639367 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.654305 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.659432 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.693262 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.965601 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.007777 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.007944 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.026028 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.063981 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.078219 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.113016 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.125335 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.153163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.166748 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.204221 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.234092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.309196 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.326105 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.326163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.343374 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.343860 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.392239 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.538310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.588384 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.896627 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.961846 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.974987 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.979583 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.989152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.038095 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.049699 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.058848 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.073436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.078980 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.089931 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.112934 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.122178 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.134014 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.144760 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.155288 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.165672 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.166654 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.184077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.230163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.239554 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.241746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.309974 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.311068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.068849 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.110518 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.126578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.165618 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.182801 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.183356 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.192771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.200864 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.209184 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.233922 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.289008 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.303120 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.303628 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.312899 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.320179 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.353141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.592506 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.608647 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:27 crc kubenswrapper[4739]: E0121 16:49:27.609378 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.609397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.609678 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.611356 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.612949 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.613039 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.619374 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.645233 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.692228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772453 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.822047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.824706 4739 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.847640 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.875940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.876022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.876119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.878067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.878335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.890998 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h4pts"]
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.896193 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.923026 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.951556 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.964887 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977346 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.019042 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.053077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.078897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.079415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.080504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.082255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.082718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.117097 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.143945 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.148895 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.171467 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.183928 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.211197 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.318701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.325499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.344541 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.388252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.462132 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.560576 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.592369 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.599321 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.644010 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.685450 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.722868 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log"
Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.742433 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log"
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.083829 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h4pts"]
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.209054 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"]
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.896869 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log"
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.915408 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log"
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.929176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log"
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966699 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3" exitCode=0
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"}
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerStarted","Data":"d881e7b2f6542202e05ac1ce06123f71718197389438795f38a883d504a2c4ab"}
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.968740 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972772 4739 generic.go:334] "Generic (PLEG): container finished" podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" exitCode=0
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9"}
Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"26fe6d5d6a3094e45a8ae8d1bb1bb0f68452735c4a06caee1932351ff3bbc39d"}
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.008809 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.020645 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.047636 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.061932 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.077152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.089870 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.090352 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.116948 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.174357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.185022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.186840 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.198084 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log"
Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.200468 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.001224 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777" exitCode=0
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.001864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"}
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.010880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"}
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.606258 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/kube-multus-additional-cni-plugins/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.614893 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/egress-router-binary-copy/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.623395 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/cni-plugins/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.633875 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/bond-cni-plugin/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.644108 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/routeoverride-cni/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.657235 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/whereabouts-cni-bincopy/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.672545 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/whereabouts-cni/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.708417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/multus-admission-controller/0.log"
Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.715495 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/kube-rbac-proxy/0.log"
path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/kube-rbac-proxy/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.744441 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.829218 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/3.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.869375 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mwzx6_b8521870-96a9-4db6-94b3-9f69336d280b/network-metrics-daemon/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.888974 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mwzx6_b8521870-96a9-4db6-94b3-9f69336d280b/kube-rbac-proxy/0.log" Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.040224 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerStarted","Data":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"} Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.045624 4739 generic.go:334] "Generic (PLEG): container finished" podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" exitCode=0 Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.045669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"} Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.089993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pg7sh" podStartSLOduration=3.663288993 podStartE2EDuration="6.08997455s" podCreationTimestamp="2026-01-21 16:49:27 +0000 UTC" firstStartedPulling="2026-01-21 16:49:29.968465668 +0000 UTC m=+5001.659171932" lastFinishedPulling="2026-01-21 16:49:32.395151225 +0000 UTC m=+5004.085857489" observedRunningTime="2026-01-21 16:49:33.084151882 +0000 UTC m=+5004.774858146" watchObservedRunningTime="2026-01-21 16:49:33.08997455 +0000 UTC m=+5004.780680814" Jan 21 16:49:35 crc kubenswrapper[4739]: I0121 16:49:35.063003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.462635 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.463131 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.559885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593321 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593375 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593388 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h4pts" podStartSLOduration=7.8786884 podStartE2EDuration="11.593373521s" podCreationTimestamp="2026-01-21 16:49:27 +0000 UTC" firstStartedPulling="2026-01-21 16:49:29.981961295 +0000 UTC m=+5001.672667559" lastFinishedPulling="2026-01-21 16:49:33.696646416 +0000 UTC m=+5005.387352680" observedRunningTime="2026-01-21 16:49:35.098050437 +0000 UTC m=+5006.788756701" watchObservedRunningTime="2026-01-21 16:49:38.593373521 +0000 UTC m=+5010.284079785" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.756465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:39 crc kubenswrapper[4739]: I0121 16:49:39.650194 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:39 crc kubenswrapper[4739]: I0121 16:49:39.656752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:40 crc kubenswrapper[4739]: I0121 16:49:40.994368 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.116482 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pg7sh" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" containerID="cri-o://5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" gracePeriod=2 Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.607711 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.739364 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") "
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") "
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") "
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740893 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities" (OuterVolumeSpecName: "utilities") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.741173 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.759530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz" (OuterVolumeSpecName: "kube-api-access-49frz") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "kube-api-access-49frz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.799046 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.843130 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.843379 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.002392 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h4pts"]
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.003157 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h4pts" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" containerID="cri-o://ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" gracePeriod=2
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133275 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" exitCode=0
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133331 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"}
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"d881e7b2f6542202e05ac1ce06123f71718197389438795f38a883d504a2c4ab"}
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133385 4739 scope.go:117] "RemoveContainer" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133427 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.189729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"]
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.190331 4739 scope.go:117] "RemoveContainer" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.199208 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"]
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.229447 4739 scope.go:117] "RemoveContainer" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.307228 4739 scope.go:117] "RemoveContainer" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"
Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316001 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": container with ID starting with 5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5 not found: ID does not exist" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316057 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"} err="failed to get container status \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": rpc error: code = NotFound desc = could not find container \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": container with ID starting with 5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5 not found: ID does not exist"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316081 4739 scope.go:117] "RemoveContainer" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"
Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316603 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": container with ID starting with 52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777 not found: ID does not exist" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316621 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"} err="failed to get container status \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": rpc error: code = NotFound desc = could not find container \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": container with ID starting with 52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777 not found: ID does not exist"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316653 4739 scope.go:117] "RemoveContainer" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"
Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316910 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": container with ID starting with 5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3 not found: ID does not exist" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316930 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"} err="failed to get container status \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": rpc error: code = NotFound desc = could not find container \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": container with ID starting with 5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3 not found: ID does not exist"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.484731 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h4pts"
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559192 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") "
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") "
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") "
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.560197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities" (OuterVolumeSpecName: "utilities") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.560529 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.566083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm" (OuterVolumeSpecName: "kube-api-access-vxcdm") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "kube-api-access-vxcdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.618206 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.662883 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.662926 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.796309 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" path="/var/lib/kubelet/pods/a7272cf3-4249-4fb1-952e-85d1f82dfb98/volumes" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144640 4739 generic.go:334] "Generic (PLEG): container finished" podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" exitCode=0 Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"26fe6d5d6a3094e45a8ae8d1bb1bb0f68452735c4a06caee1932351ff3bbc39d"} Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144769 4739 scope.go:117] "RemoveContainer" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144933 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.179573 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.182102 4739 scope.go:117] "RemoveContainer" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.189183 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.204460 4739 scope.go:117] "RemoveContainer" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.238811 4739 scope.go:117] "RemoveContainer" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.239434 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": container with ID starting with ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648 not found: ID does not exist" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.239479 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} err="failed to get container status \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": rpc error: code = NotFound desc = could not find container \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": container with ID starting with ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648 not found: ID does not exist" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.239508 4739 scope.go:117] "RemoveContainer" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.240110 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": container with ID starting with d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e not found: ID does not exist" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240129 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"} err="failed to get container status \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": rpc error: code = NotFound desc = could not find container \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": container with ID starting with d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e not found: ID does not exist" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240142 4739 scope.go:117] "RemoveContainer" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.240446 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": container with ID starting with a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9 not found: ID does not exist" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240496 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9"} err="failed to get container status \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": rpc error: code = NotFound desc = could not find container \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": container with ID starting with a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9 not found: ID does not exist" Jan 21 16:49:44 crc kubenswrapper[4739]: I0121 16:49:44.797852 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" path="/var/lib/kubelet/pods/802f8ce8-e6a3-4685-869a-c5d9720800a8/volumes" Jan 21 16:51:05 crc kubenswrapper[4739]: I0121 16:51:05.222431 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:05 crc kubenswrapper[4739]: I0121 16:51:05.222932 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:35 crc kubenswrapper[4739]: I0121 16:51:35.223224 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:35 crc kubenswrapper[4739]: I0121 16:51:35.223842 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.222525 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223018 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223069 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223750 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223888 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" gracePeriod=600 Jan 21 16:52:05 crc kubenswrapper[4739]: E0121 16:52:05.347200 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522573 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" exitCode=0 Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522622 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522724 4739 scope.go:117] "RemoveContainer" containerID="4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.523374 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:05 crc kubenswrapper[4739]: E0121 16:52:05.523662 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:07 crc kubenswrapper[4739]: I0121 16:52:07.302810 4739 scope.go:117] "RemoveContainer" containerID="b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755" Jan 21 16:52:20 crc kubenswrapper[4739]: I0121 16:52:20.783853 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:20 crc kubenswrapper[4739]: E0121 16:52:20.784791 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.935545 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936724 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936751 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936767 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936775 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936791 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936801 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936891 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936907 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936922 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936929 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936948 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936959 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.937242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.937293 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.939356 4739 util.go:30] "No sandbox for pod can be found. 
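The repeated "back-off 5m0s restarting failed container" errors above are CrashLoopBackOff at steady state: machine-config-daemon has crashed often enough that the kubelet's restart backoff sits at its cap, so each sync attempt inside the window is skipped with this error. The commonly documented schedule is exponential, starting at 10s and doubling per consecutive crash up to the 5m cap visible here; a sketch under that assumption (base and factor are not confirmed by this log, only the cap is):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models the commonly described kubelet restart backoff:
// 10s base, doubling per crash, capped at 5 minutes. Base and factor are
// assumptions; the 5m cap matches the "back-off 5m0s" messages above.
func crashLoopDelay(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restarts=%d wait=%s\n", r, crashLoopDelay(r))
	}
}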
Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.980230 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"]
Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101280 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.126924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.267755 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.807968 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"]
Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782170 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" exitCode=0
Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080"}
Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"445a9427920e98d18d71124e3eb091e41e77b8b357194c5fcc31e68f9e405505"}
Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.783183 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"
Jan 21 16:52:33 crc kubenswrapper[4739]: E0121 16:52:33.783760 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:52:34 crc kubenswrapper[4739]: I0121 16:52:34.798743 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"}
Jan 21 16:52:39 crc kubenswrapper[4739]: I0121 16:52:39.846785 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" exitCode=0
Jan 21 16:52:39 crc kubenswrapper[4739]: I0121 16:52:39.846917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"}
Jan 21 16:52:40 crc kubenswrapper[4739]: I0121 16:52:40.974078 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"}
Jan 21 16:52:41 crc kubenswrapper[4739]: I0121 16:52:41.001246 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8knc2" podStartSLOduration=3.368845121 podStartE2EDuration="10.001223223s" podCreationTimestamp="2026-01-21 16:52:31 +0000 UTC" firstStartedPulling="2026-01-21 16:52:33.785405821 +0000 UTC m=+5185.476112095" lastFinishedPulling="2026-01-21 16:52:40.417783933 +0000 UTC m=+5192.108490197" observedRunningTime="2026-01-21 16:52:40.993569325 +0000 UTC m=+5192.684275609" watchObservedRunningTime="2026-01-21 16:52:41.001223223 +0000 UTC m=+5192.691929497"
Jan 21 16:52:42 crc kubenswrapper[4739]: I0121 16:52:42.269199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:42 crc kubenswrapper[4739]: I0121 16:52:42.270165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:43 crc kubenswrapper[4739]: I0121 16:52:43.326016 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8knc2" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:52:43 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:52:43 crc kubenswrapper[4739]: >
Jan 21 16:52:44 crc kubenswrapper[4739]: I0121 16:52:44.783756 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"
Jan 21 16:52:44 crc kubenswrapper[4739]: E0121 16:52:44.784426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.316203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.379829 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8knc2"
Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.573400 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"]
Jan 21 16:52:54 crc kubenswrapper[4739]: I0121 16:52:54.073281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8knc2" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" containerID="cri-o://e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" gracePeriod=2
Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.037388 4739 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123743 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" exitCode=0 Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"} Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123807 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"445a9427920e98d18d71124e3eb091e41e77b8b357194c5fcc31e68f9e405505"} Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123865 4739 scope.go:117] "RemoveContainer" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.173324 4739 scope.go:117] "RemoveContainer" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.204960 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.205057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.205131 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.208731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities" (OuterVolumeSpecName: "utilities") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.217096 4739 scope.go:117] "RemoveContainer" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.217315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm" (OuterVolumeSpecName: "kube-api-access-fw4zm") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "kube-api-access-fw4zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.300700 4739 scope.go:117] "RemoveContainer" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.302970 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": container with ID starting with e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c not found: ID does not exist" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303072 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"} err="failed to get container status \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": rpc error: code = NotFound desc = could not find container \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": container with ID starting with e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303176 4739 scope.go:117] "RemoveContainer" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.303469 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": container with ID starting with d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6 not found: ID does not exist" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303491 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"} err="failed to get container status \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": rpc error: code = NotFound desc = could not find container \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": container with ID starting with d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6 not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303507 4739 scope.go:117] "RemoveContainer" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.303712 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": container with ID starting with eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080 not found: ID does not exist" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303785 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080"} err="failed to get container status \"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": rpc error: code = NotFound desc = could not find container \"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": container with ID starting with eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080 not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.307184 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.307205 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.346788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.409223 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.469000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.479587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:56 crc kubenswrapper[4739]: I0121 16:52:56.795700 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" path="/var/lib/kubelet/pods/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020/volumes" Jan 21 16:52:57 crc kubenswrapper[4739]: I0121 16:52:57.782495 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:57 crc kubenswrapper[4739]: E0121 16:52:57.782926 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:07 crc kubenswrapper[4739]: I0121 16:53:07.401262 4739 scope.go:117] "RemoveContainer" containerID="6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b" Jan 21 16:53:08 crc kubenswrapper[4739]: I0121 16:53:08.793064 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:08 crc kubenswrapper[4739]: E0121 16:53:08.793595 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.137808 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138758 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138776 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138798 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-utilities" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138806 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-utilities" Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138856 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-content" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138866 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-content" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.139054 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.140619 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.162919 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.335724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.335807 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.336015 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: 
\"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.439071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.459382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.475135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.843037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319194 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" exitCode=0 Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff"} Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerStarted","Data":"0968e949d64d54ada8bff648a1c163fce0610703e36c6c822beff6d7773398be"} Jan 21 16:53:19 crc kubenswrapper[4739]: I0121 16:53:19.340068 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" exitCode=0 Jan 21 16:53:19 crc kubenswrapper[4739]: I0121 16:53:19.340129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3"} Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.350056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerStarted","Data":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.422552 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbmld" podStartSLOduration=1.955467442 podStartE2EDuration="4.422532165s" podCreationTimestamp="2026-01-21 16:53:16 +0000 UTC" firstStartedPulling="2026-01-21 16:53:17.321085787 +0000 UTC m=+5229.011792071" lastFinishedPulling="2026-01-21 16:53:19.78815053 +0000 UTC m=+5231.478856794" observedRunningTime="2026-01-21 
16:53:20.380253726 +0000 UTC m=+5232.070960000" watchObservedRunningTime="2026-01-21 16:53:20.422532165 +0000 UTC m=+5232.113238429" Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.785645 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:20 crc kubenswrapper[4739]: E0121 16:53:20.785999 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.476073 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.477455 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.527907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:27 crc kubenswrapper[4739]: I0121 16:53:27.484298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:27 crc kubenswrapper[4739]: I0121 16:53:27.544528 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:29 crc kubenswrapper[4739]: I0121 16:53:29.439884 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbmld" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" containerID="cri-o://3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" gracePeriod=2 Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.441598 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452075 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" exitCode=0 Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"0968e949d64d54ada8bff648a1c163fce0610703e36c6c822beff6d7773398be"} Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452205 4739 scope.go:117] "RemoveContainer" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452421 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.499892 4739 scope.go:117] "RemoveContainer" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.524394 4739 scope.go:117] "RemoveContainer" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.527939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.528023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.528158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.529002 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities" (OuterVolumeSpecName: "utilities") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.534628 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6" (OuterVolumeSpecName: "kube-api-access-6snh6") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "kube-api-access-6snh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.553694 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630150 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630394 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630453 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630946 4739 scope.go:117] "RemoveContainer" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: E0121 16:53:30.631330 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": container with ID starting with 3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd not found: ID does not exist" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631427 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} err="failed to get container status \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": rpc error: code = NotFound desc = could not find container \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": container with ID starting with 3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631506 4739 scope.go:117] "RemoveContainer" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: E0121 16:53:30.631924 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": container with ID starting with 7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3 not found: ID does not exist" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631970 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3"} err="failed to get container status \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": rpc error: code = NotFound desc = could not find container \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": container with ID starting with 7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3 not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.632018 4739 scope.go:117] "RemoveContainer" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc 
kubenswrapper[4739]: E0121 16:53:30.632389 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": container with ID starting with a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff not found: ID does not exist" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.632463 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff"} err="failed to get container status \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": rpc error: code = NotFound desc = could not find container \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": container with ID starting with a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.801871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.810860 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:31 crc kubenswrapper[4739]: I0121 16:53:31.783308 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:31 crc kubenswrapper[4739]: E0121 16:53:31.783824 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:32 crc kubenswrapper[4739]: I0121 16:53:32.794027 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" path="/var/lib/kubelet/pods/b8bffeba-7066-47d6-b3a0-b26636b59417/volumes" Jan 21 16:53:45 crc kubenswrapper[4739]: I0121 16:53:45.782970 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:45 crc kubenswrapper[4739]: E0121 16:53:45.783735 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:59 crc kubenswrapper[4739]: I0121 16:53:59.783732 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:59 crc kubenswrapper[4739]: E0121 16:53:59.784508 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:14 crc kubenswrapper[4739]: I0121 16:54:14.783581 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:14 crc kubenswrapper[4739]: E0121 16:54:14.784341 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:25 crc kubenswrapper[4739]: I0121 16:54:25.783840 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:25 crc kubenswrapper[4739]: E0121 16:54:25.784556 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:38 crc kubenswrapper[4739]: I0121 16:54:38.789388 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:38 crc kubenswrapper[4739]: E0121 16:54:38.791601 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:53 crc kubenswrapper[4739]: I0121 16:54:53.784632 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:53 crc kubenswrapper[4739]: E0121 16:54:53.785287 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:06 crc kubenswrapper[4739]: I0121 16:55:06.783336 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:06 crc kubenswrapper[4739]: E0121 16:55:06.784281 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:18 crc kubenswrapper[4739]: I0121 16:55:18.793657 4739 
scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:18 crc kubenswrapper[4739]: E0121 16:55:18.794416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:30 crc kubenswrapper[4739]: I0121 16:55:30.785342 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:30 crc kubenswrapper[4739]: E0121 16:55:30.786242 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:43 crc kubenswrapper[4739]: I0121 16:55:43.783408 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:43 crc kubenswrapper[4739]: E0121 16:55:43.784268 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:54 crc kubenswrapper[4739]: I0121 16:55:54.784110 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:54 crc kubenswrapper[4739]: E0121 16:55:54.784873 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:09 crc kubenswrapper[4739]: I0121 16:56:09.785157 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:09 crc kubenswrapper[4739]: E0121 16:56:09.785965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:22 crc kubenswrapper[4739]: I0121 16:56:22.782665 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:22 crc kubenswrapper[4739]: E0121 16:56:22.783489 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:34 crc kubenswrapper[4739]: I0121 16:56:34.783291 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:34 crc kubenswrapper[4739]: E0121 16:56:34.784008 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:45 crc kubenswrapper[4739]: I0121 16:56:45.783353 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:45 crc kubenswrapper[4739]: E0121 16:56:45.784373 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:59 crc kubenswrapper[4739]: I0121 16:56:59.784484 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:59 crc kubenswrapper[4739]: E0121 16:56:59.785278 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.451936 4739 generic.go:334] "Generic (PLEG): container finished" podID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" exitCode=0 Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.452051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerDied","Data":"70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392"} Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.453054 4739 scope.go:117] "RemoveContainer" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.526156 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/gather/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.344612 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.345452 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gd2st/must-gather-smrdj" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" containerID="cri-o://107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" gracePeriod=2 Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.366476 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.560148 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.560827 4739 generic.go:334] "Generic (PLEG): container finished" podID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerID="107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" exitCode=143 Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.854606 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.855022 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.859609 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.859720 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.865885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l" (OuterVolumeSpecName: "kube-api-access-rgq7l") pod "4a63aa7f-39ab-48de-bb46-86db1661dfbf" (UID: "4a63aa7f-39ab-48de-bb46-86db1661dfbf"). InnerVolumeSpecName "kube-api-access-rgq7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.963795 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.109858 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4a63aa7f-39ab-48de-bb46-86db1661dfbf" (UID: "4a63aa7f-39ab-48de-bb46-86db1661dfbf"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.167393 4739 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.570734 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.571235 4739 scope.go:117] "RemoveContainer" containerID="107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.571313 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.605573 4739 scope.go:117] "RemoveContainer" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" Jan 21 16:57:12 crc kubenswrapper[4739]: I0121 16:57:12.784961 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:57:12 crc kubenswrapper[4739]: I0121 16:57:12.793694 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" path="/var/lib/kubelet/pods/4a63aa7f-39ab-48de-bb46-86db1661dfbf/volumes" Jan 21 16:57:13 crc kubenswrapper[4739]: I0121 16:57:13.603210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"7e3ca86560868d371160281702114be8de7374b79de0dc1901b4688ad6193471"} Jan 21 16:59:35 crc kubenswrapper[4739]: I0121 16:59:35.222630 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:35 crc kubenswrapper[4739]: I0121 16:59:35.223364 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.860436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861503 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-content" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861523 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-content" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861542 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-utilities" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861549 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-utilities" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861573 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861591 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861597 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861619 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861626 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861887 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861906 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861919 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.863627 4739 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.884337 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x96d"]
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149716 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.150524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.150646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.183485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d"
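[Annotation] The volume sequence above follows a desired-state/actual-state pattern: each volume in the pod spec is first verified as attached, then a mount operation is started, then SetUp completion is recorded. A loose, illustrative sketch of such a reconcile pass, assuming a toy in-memory actual state; types and names here are illustrative, not kubelet's real ones:

package main

import "fmt"

type volume struct{ name, pod string }

// reconcile brings the actual state (mounted) toward the desired state,
// in the verify -> mount -> record-success order visible in the entries above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		id := v.pod + "/" + v.name
		if mounted[id] {
			continue // already in actual state; nothing to do
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.name)
		fmt.Printf("MountVolume started for %q\n", v.name)
		mounted[id] = true // a real SetUp would do filesystem work here
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"catalog-content", "community-operators-6x96d"},
		{"utilities", "community-operators-6x96d"},
		{"kube-api-access-q74d4", "community-operators-6x96d"},
	}
	reconcile(desired, map[string]bool{})
}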
"MountVolume.SetUp succeeded for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.202684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.821968 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195664 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" exitCode=0 Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"} Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"dd53645d128655f67d307e0096c871be93fdeff6e9d4964f1091ff8ff5c2f750"} Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.205448 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:59:57 crc kubenswrapper[4739]: I0121 16:59:57.216918 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} Jan 21 16:59:58 crc kubenswrapper[4739]: I0121 16:59:58.226182 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932" exitCode=0 Jan 21 16:59:58 crc kubenswrapper[4739]: I0121 16:59:58.226250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} Jan 21 16:59:59 crc kubenswrapper[4739]: I0121 16:59:59.237029 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} Jan 21 16:59:59 crc kubenswrapper[4739]: I0121 16:59:59.264383 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6x96d" podStartSLOduration=2.81535562 podStartE2EDuration="6.264365455s" podCreationTimestamp="2026-01-21 16:59:53 +0000 UTC" firstStartedPulling="2026-01-21 16:59:55.205240301 +0000 UTC m=+5626.895946575" lastFinishedPulling="2026-01-21 16:59:58.654250156 +0000 UTC m=+5630.344956410" observedRunningTime="2026-01-21 16:59:59.253852439 +0000 UTC m=+5630.944558703" watchObservedRunningTime="2026-01-21 
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.155875 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"]
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.157611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.160593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.160878 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.187512 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.187973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.188031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.189396 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"]
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"
\"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.291660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.297427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.311743 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.488476 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.084963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"] Jan 21 17:00:01 crc kubenswrapper[4739]: W0121 17:00:01.093954 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c033dec_eba2_4ba9_ae56_1858f0b67d72.slice/crio-7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087 WatchSource:0}: Error finding container 7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087: Status 404 returned error can't find the container with id 7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087 Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.258934 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerStarted","Data":"e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583"} Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.259260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerStarted","Data":"7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087"} Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.279529 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" podStartSLOduration=1.2795044199999999 podStartE2EDuration="1.27950442s" podCreationTimestamp="2026-01-21 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:00:01.274105533 +0000 UTC m=+5632.964811807" 
watchObservedRunningTime="2026-01-21 17:00:01.27950442 +0000 UTC m=+5632.970210684" Jan 21 17:00:02 crc kubenswrapper[4739]: I0121 17:00:02.272114 4739 generic.go:334] "Generic (PLEG): container finished" podID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerID="e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4739]: I0121 17:00:02.272168 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerDied","Data":"e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583"} Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.653737 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759928 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.765710 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s" (OuterVolumeSpecName: "kube-api-access-xdf8s") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "kube-api-access-xdf8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.768249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861877 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861937 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861947 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.203231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.203524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.257565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.292854 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.294060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerDied","Data":"7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087"} Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.294152 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.357398 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.369524 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.369908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.499413 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.794676 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" path="/var/lib/kubelet/pods/500844a7-398c-49ff-ab43-ee0502f1c576/volumes" Jan 21 17:00:05 crc kubenswrapper[4739]: I0121 17:00:05.223270 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 21 17:00:05 crc kubenswrapper[4739]: I0121 17:00:05.223334 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.308207 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6x96d" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" containerID="cri-o://1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" gracePeriod=2
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.835445 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x96d"
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932534 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") "
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") "
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932713 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") "
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.934709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities" (OuterVolumeSpecName: "utilities") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.965379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4" (OuterVolumeSpecName: "kube-api-access-q74d4") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "kube-api-access-q74d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.986254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
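[Annotation] The "Killing container with a grace period" entry above (gracePeriod=2) refers to the usual two-phase stop: ask the runtime to stop the container, wait up to the grace period for it to exit, then force-kill. A hedged Go sketch of that pattern under stated assumptions; stop and forceKill are hypothetical callbacks standing in for the runtime calls:

package main

import (
	"fmt"
	"time"
)

// killWithGrace waits up to grace for a polite stop to complete, then
// escalates, mirroring the gracePeriod semantics in the entry above.
func killWithGrace(stop func() <-chan struct{}, forceKill func(), grace time.Duration) {
	done := stop() // polite stop (e.g. SIGTERM via the runtime)
	select {
	case <-done:
		fmt.Println("container exited within grace period")
	case <-time.After(grace):
		fmt.Println("grace period expired; force-killing")
		forceKill() // hard stop (e.g. SIGKILL via the runtime)
	}
}

func main() {
	stop := func() <-chan struct{} {
		ch := make(chan struct{})
		go func() { time.Sleep(500 * time.Millisecond); close(ch) }()
		return ch
	}
	killWithGrace(stop, func() {}, 2*time.Second)
}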
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035533 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035575 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035589 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338577 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" exitCode=0 Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"dd53645d128655f67d307e0096c871be93fdeff6e9d4964f1091ff8ff5c2f750"} Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.339022 4739 scope.go:117] "RemoveContainer" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338719 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.364635 4739 scope.go:117] "RemoveContainer" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.392715 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x96d"]
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.396568 4739 scope.go:117] "RemoveContainer" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.407418 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6x96d"]
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.469502 4739 scope.go:117] "RemoveContainer" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"
Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.470620 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": container with ID starting with 1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124 not found: ID does not exist" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.470677 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} err="failed to get container status \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": rpc error: code = NotFound desc = could not find container \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": container with ID starting with 1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124 not found: ID does not exist"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.470715 4739 scope.go:117] "RemoveContainer" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"
Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.471042 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": container with ID starting with 67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932 not found: ID does not exist" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471072 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} err="failed to get container status \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": rpc error: code = NotFound desc = could not find container \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": container with ID starting with 67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932 not found: ID does not exist"
Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471094 4739 scope.go:117] "RemoveContainer" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"
Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.471761 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": container with ID starting with 1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e not found: ID does not exist" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"
failed" err="rpc error: code = NotFound desc = could not find container \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": container with ID starting with 1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e not found: ID does not exist" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471804 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"} err="failed to get container status \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": rpc error: code = NotFound desc = could not find container \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": container with ID starting with 1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e not found: ID does not exist" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.939186 4739 scope.go:117] "RemoveContainer" containerID="9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d" Jan 21 17:00:08 crc kubenswrapper[4739]: I0121 17:00:08.801015 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" path="/var/lib/kubelet/pods/74b9ab6f-276d-46ce-a141-1074064bbf3a/volumes" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.517467 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"] Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518469 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-utilities" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518484 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-utilities" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518497 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518506 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-content" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518531 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-content" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518560 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518568 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518809 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518852 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.528842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"]
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571250 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571445 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673513 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.674071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.674138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t"
pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.694808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.837250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:24 crc kubenswrapper[4739]: I0121 17:00:24.420001 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"] Jan 21 17:00:24 crc kubenswrapper[4739]: I0121 17:00:24.504475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerStarted","Data":"9bacf198aa3b44a7e8ba63f2404eefc11f31a0fb4aa8b5ef9fbe54e2a3468d3e"} Jan 21 17:00:25 crc kubenswrapper[4739]: I0121 17:00:25.515620 4739 generic.go:334] "Generic (PLEG): container finished" podID="7acbaf76-6be9-4b64-8845-f81a5d6fbd4a" containerID="116c3e3cd8c5d6eaeb4a523c4c7cb59e3785e0aa6448b9ea877905cd0f3daaee" exitCode=0 Jan 21 17:00:25 crc kubenswrapper[4739]: I0121 17:00:25.515755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerDied","Data":"116c3e3cd8c5d6eaeb4a523c4c7cb59e3785e0aa6448b9ea877905cd0f3daaee"} Jan 21 17:00:27 crc kubenswrapper[4739]: I0121 17:00:27.536311 4739 generic.go:334] "Generic (PLEG): container finished" podID="7acbaf76-6be9-4b64-8845-f81a5d6fbd4a" containerID="da70bf8240f742cd7155a7644d2cf432872f521e427bbd79fe760a6f7d383756" exitCode=0 Jan 21 17:00:27 crc kubenswrapper[4739]: I0121 17:00:27.536396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerDied","Data":"da70bf8240f742cd7155a7644d2cf432872f521e427bbd79fe760a6f7d383756"} Jan 21 17:00:29 crc kubenswrapper[4739]: I0121 17:00:29.555073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerStarted","Data":"8a16edb50a6a9fe661a2251ec894f217ac9af0111473e0463ef2c28791e0356c"} Jan 21 17:00:29 crc kubenswrapper[4739]: I0121 17:00:29.616435 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6nt8t" podStartSLOduration=3.82367114 podStartE2EDuration="6.616403547s" podCreationTimestamp="2026-01-21 17:00:23 +0000 UTC" firstStartedPulling="2026-01-21 17:00:25.518292208 +0000 UTC m=+5657.208998472" lastFinishedPulling="2026-01-21 17:00:28.311024615 +0000 UTC m=+5660.001730879" observedRunningTime="2026-01-21 17:00:29.569598147 +0000 UTC m=+5661.260304421" watchObservedRunningTime="2026-01-21 17:00:29.616403547 +0000 UTC m=+5661.307109801" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134203073024443 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134203073017360 5ustar 